00:00:00.001 Started by upstream project "autotest-nightly" build number 3914 00:00:00.001 originally caused by: 00:00:00.001 Started by user Latecki, Karol 00:00:00.002 Started by upstream project "autotest-nightly" build number 3912 00:00:00.002 originally caused by: 00:00:00.002 Started by user Latecki, Karol 00:00:00.003 Started by upstream project "autotest-nightly" build number 3911 00:00:00.003 originally caused by: 00:00:00.003 Started by user Latecki, Karol 00:00:00.004 Started by upstream project "autotest-nightly" build number 3909 00:00:00.004 originally caused by: 00:00:00.004 Started by user Latecki, Karol 00:00:00.005 Started by upstream project "autotest-nightly" build number 3908 00:00:00.005 originally caused by: 00:00:00.005 Started by user Latecki, Karol 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.117 The recommended git tool is: git 00:00:00.117 using credential 00000000-0000-0000-0000-000000000002 00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.186 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.249 Using shallow fetch with depth 1 00:00:00.249 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.249 > git --version # timeout=10 00:00:00.291 > git --version # 'git version 2.39.2' 00:00:00.291 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.321 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.321 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:07.504 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.518 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.530 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:07.530 > git config core.sparsecheckout # timeout=10 00:00:07.542 > git read-tree -mu HEAD # timeout=10 00:00:07.558 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:07.579 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:07.579 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:07.671 [Pipeline] Start of Pipeline 00:00:07.686 [Pipeline] library 00:00:07.687 Loading library shm_lib@master 00:00:07.687 Library shm_lib@master is cached. Copying from home. 
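For reference, the git checkout at the top of this log boils down to a shallow fetch of a single Gerrit patch set followed by a detached checkout. A minimal equivalent outside Jenkins (credential handling, which Jenkins injects via GIT_ASKPASS, and the proxy setting are omitted; the URL and ref are the ones shown above):

    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 keeps the fetch shallow, exactly as in the log above
    git fetch --tags --force --depth=1 origin refs/changes/29/24129/6
    # FETCH_HEAD resolves to the same revision the job checks out
    git checkout -f FETCH_HEAD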
00:00:07.705 [Pipeline] node 00:00:07.716 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.717 [Pipeline] { 00:00:07.730 [Pipeline] catchError 00:00:07.732 [Pipeline] { 00:00:07.746 [Pipeline] wrap 00:00:07.756 [Pipeline] { 00:00:07.762 [Pipeline] stage 00:00:07.763 [Pipeline] { (Prologue) 00:00:07.963 [Pipeline] sh 00:00:08.245 + logger -p user.info -t JENKINS-CI 00:00:08.260 [Pipeline] echo 00:00:08.261 Node: GP8 00:00:08.267 [Pipeline] sh 00:00:08.563 [Pipeline] setCustomBuildProperty 00:00:08.575 [Pipeline] echo 00:00:08.577 Cleanup processes 00:00:08.582 [Pipeline] sh 00:00:08.868 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.868 2072177 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.883 [Pipeline] sh 00:00:09.170 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.170 ++ grep -v 'sudo pgrep' 00:00:09.170 ++ awk '{print $1}' 00:00:09.170 + sudo kill -9 00:00:09.170 + true 00:00:09.186 [Pipeline] cleanWs 00:00:09.195 [WS-CLEANUP] Deleting project workspace... 00:00:09.196 [WS-CLEANUP] Deferred wipeout is used... 00:00:09.203 [WS-CLEANUP] done 00:00:09.208 [Pipeline] setCustomBuildProperty 00:00:09.227 [Pipeline] sh 00:00:09.513 + sudo git config --global --replace-all safe.directory '*' 00:00:09.621 [Pipeline] httpRequest 00:00:09.673 [Pipeline] echo 00:00:09.675 Sorcerer 10.211.164.101 is alive 00:00:09.686 [Pipeline] httpRequest 00:00:09.691 HttpMethod: GET 00:00:09.692 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:09.693 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:09.708 Response Code: HTTP/1.1 200 OK 00:00:09.708 Success: Status code 200 is in the accepted range: 200,404 00:00:09.708 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:15.476 [Pipeline] sh 00:00:15.764 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:16.042 [Pipeline] httpRequest 00:00:16.075 [Pipeline] echo 00:00:16.077 Sorcerer 10.211.164.101 is alive 00:00:16.086 [Pipeline] httpRequest 00:00:16.091 HttpMethod: GET 00:00:16.092 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:16.093 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:16.107 Response Code: HTTP/1.1 200 OK 00:00:16.107 Success: Status code 200 is in the accepted range: 200,404 00:00:16.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:49.244 [Pipeline] sh 00:00:49.533 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:56.154 [Pipeline] sh 00:00:56.440 + git -C spdk log --oneline -n5 00:00:56.440 f7b31b2b9 log: declare g_deprecation_epoch static 00:00:56.440 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:00:56.440 3731556bd lvol: declare g_lvol_if static 00:00:56.440 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:00:56.440 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:00:56.450 [Pipeline] } 00:00:56.465 [Pipeline] // stage 00:00:56.474 [Pipeline] stage 00:00:56.477 [Pipeline] { (Prepare) 00:00:56.490 [Pipeline] writeFile 00:00:56.503 [Pipeline] sh 00:00:56.781 + logger -p user.info -t JENKINS-CI 
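The process cleanup near the start of this stage is a small guard against leftovers from a previous run: list anything still executing out of the workspace, kill it, and never fail the stage if nothing is found. A standalone sketch of that pattern, using the workspace path from this job:

    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # -a prints the full command line, -f matches against it; drop the pgrep itself
    pids=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill any leftovers; '|| true' mirrors the '+ true' above so an empty
    # result does not abort the pipeline
    [ -n "$pids" ] && sudo kill -9 $pids || true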
00:00:56.796 [Pipeline] sh 00:00:57.081 + logger -p user.info -t JENKINS-CI 00:00:57.094 [Pipeline] sh 00:00:57.378 + cat autorun-spdk.conf 00:00:57.378 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.378 SPDK_TEST_NVMF=1 00:00:57.378 SPDK_TEST_NVME_CLI=1 00:00:57.378 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.378 SPDK_TEST_NVMF_NICS=e810 00:00:57.378 SPDK_RUN_ASAN=1 00:00:57.378 SPDK_RUN_UBSAN=1 00:00:57.378 NET_TYPE=phy 00:00:57.386 RUN_NIGHTLY=1 00:00:57.391 [Pipeline] readFile 00:00:57.425 [Pipeline] withEnv 00:00:57.428 [Pipeline] { 00:00:57.444 [Pipeline] sh 00:00:57.740 + set -ex 00:00:57.740 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:57.740 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:57.740 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.740 ++ SPDK_TEST_NVMF=1 00:00:57.740 ++ SPDK_TEST_NVME_CLI=1 00:00:57.740 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.740 ++ SPDK_TEST_NVMF_NICS=e810 00:00:57.740 ++ SPDK_RUN_ASAN=1 00:00:57.740 ++ SPDK_RUN_UBSAN=1 00:00:57.740 ++ NET_TYPE=phy 00:00:57.740 ++ RUN_NIGHTLY=1 00:00:57.740 + case $SPDK_TEST_NVMF_NICS in 00:00:57.740 + DRIVERS=ice 00:00:57.740 + [[ tcp == \r\d\m\a ]] 00:00:57.740 + [[ -n ice ]] 00:00:57.740 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:57.740 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:57.740 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:57.740 rmmod: ERROR: Module irdma is not currently loaded 00:00:57.740 rmmod: ERROR: Module i40iw is not currently loaded 00:00:57.740 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:57.740 + true 00:00:57.740 + for D in $DRIVERS 00:00:57.740 + sudo modprobe ice 00:00:57.740 + exit 0 00:00:57.756 [Pipeline] } 00:00:57.774 [Pipeline] // withEnv 00:00:57.780 [Pipeline] } 00:00:57.799 [Pipeline] // stage 00:00:57.812 [Pipeline] catchError 00:00:57.814 [Pipeline] { 00:00:57.830 [Pipeline] timeout 00:00:57.831 Timeout set to expire in 50 min 00:00:57.833 [Pipeline] { 00:00:57.849 [Pipeline] stage 00:00:57.851 [Pipeline] { (Tests) 00:00:57.867 [Pipeline] sh 00:00:58.154 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.154 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.154 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.154 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:58.154 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:58.154 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:58.154 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:58.154 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:58.154 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:58.154 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:58.154 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:58.154 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.154 + source /etc/os-release 00:00:58.154 ++ NAME='Fedora Linux' 00:00:58.154 ++ VERSION='38 (Cloud Edition)' 00:00:58.154 ++ ID=fedora 00:00:58.154 ++ VERSION_ID=38 00:00:58.154 ++ VERSION_CODENAME= 00:00:58.154 ++ PLATFORM_ID=platform:f38 00:00:58.154 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:58.154 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:58.154 ++ LOGO=fedora-logo-icon 00:00:58.154 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:58.154 ++ HOME_URL=https://fedoraproject.org/ 00:00:58.154 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:58.154 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:58.154 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:58.154 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:58.154 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:58.154 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:58.154 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:58.154 ++ SUPPORT_END=2024-05-14 00:00:58.154 ++ VARIANT='Cloud Edition' 00:00:58.154 ++ VARIANT_ID=cloud 00:00:58.154 + uname -a 00:00:58.154 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:58.154 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:00.063 Hugepages 00:01:00.063 node hugesize free / total 00:01:00.063 node0 1048576kB 0 / 0 00:01:00.063 node0 2048kB 0 / 0 00:01:00.063 node1 1048576kB 0 / 0 00:01:00.063 node1 2048kB 0 / 0 00:01:00.063 00:01:00.063 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:00.063 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:00.063 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:00.063 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:00.063 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:00.063 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:00.063 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:00.063 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:00.063 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:00.063 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:00.063 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:00.063 + rm -f /tmp/spdk-ld-path 00:01:00.063 + source autorun-spdk.conf 00:01:00.063 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.063 ++ SPDK_TEST_NVMF=1 00:01:00.063 ++ SPDK_TEST_NVME_CLI=1 00:01:00.063 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.063 ++ SPDK_TEST_NVMF_NICS=e810 00:01:00.063 ++ SPDK_RUN_ASAN=1 00:01:00.063 ++ SPDK_RUN_UBSAN=1 00:01:00.063 ++ NET_TYPE=phy 00:01:00.063 ++ RUN_NIGHTLY=1 00:01:00.063 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:00.063 + [[ -n '' ]] 00:01:00.063 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.063 + for M in /var/spdk/build-*-manifest.txt 00:01:00.063 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:00.063 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:00.063 + for M in /var/spdk/build-*-manifest.txt 00:01:00.063 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:00.063 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:00.063 ++ uname 00:01:00.063 + [[ Linux == \L\i\n\u\x ]] 00:01:00.063 + sudo dmesg -T 00:01:00.063 + sudo dmesg --clear 00:01:00.323 + dmesg_pid=2072864 00:01:00.323 + [[ Fedora Linux == FreeBSD ]] 00:01:00.323 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:00.323 + sudo dmesg -Tw 00:01:00.323 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:00.323 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:00.323 + [[ -x /usr/src/fio-static/fio ]] 00:01:00.323 + export FIO_BIN=/usr/src/fio-static/fio 00:01:00.323 + FIO_BIN=/usr/src/fio-static/fio 00:01:00.323 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:00.323 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:00.323 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:00.323 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:00.323 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:00.323 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:00.323 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:00.323 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:00.323 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.323 Test configuration: 00:01:00.323 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.323 SPDK_TEST_NVMF=1 00:01:00.323 SPDK_TEST_NVME_CLI=1 00:01:00.323 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.323 SPDK_TEST_NVMF_NICS=e810 00:01:00.323 SPDK_RUN_ASAN=1 00:01:00.323 SPDK_RUN_UBSAN=1 00:01:00.323 NET_TYPE=phy 00:01:00.323 RUN_NIGHTLY=1 08:14:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:00.323 08:14:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:00.323 08:14:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:00.323 08:14:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:00.324 08:14:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.324 08:14:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.324 08:14:12 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.324 08:14:12 -- paths/export.sh@5 -- $ export PATH 00:01:00.324 08:14:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.324 08:14:12 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:00.324 08:14:12 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:00.324 08:14:12 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721715252.XXXXXX 00:01:00.324 08:14:12 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721715252.I7v3rq 00:01:00.324 08:14:12 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:00.324 08:14:12 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:00.324 08:14:12 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:00.324 08:14:12 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:00.324 08:14:12 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:00.324 08:14:12 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:00.324 08:14:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:00.324 08:14:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.324 08:14:12 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:00.324 08:14:12 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:00.324 08:14:12 -- pm/common@17 -- $ local monitor 00:01:00.324 08:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.324 08:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.324 08:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.324 08:14:12 -- pm/common@21 -- $ date +%s 00:01:00.324 08:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.324 08:14:12 -- pm/common@21 -- $ date +%s 00:01:00.324 08:14:12 -- pm/common@25 -- $ sleep 1 00:01:00.324 08:14:12 -- pm/common@21 -- $ date +%s 00:01:00.324 08:14:12 -- pm/common@21 -- $ date +%s 00:01:00.324 08:14:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715252 00:01:00.324 08:14:12 -- pm/common@21 -- $ 
sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715252 00:01:00.324 08:14:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715252 00:01:00.324 08:14:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715252 00:01:00.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715252_collect-vmstat.pm.log 00:01:00.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715252_collect-cpu-load.pm.log 00:01:00.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715252_collect-cpu-temp.pm.log 00:01:00.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715252_collect-bmc-pm.bmc.pm.log 00:01:01.261 08:14:13 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:01.261 08:14:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:01.261 08:14:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:01.261 08:14:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.261 08:14:13 -- spdk/autobuild.sh@16 -- $ date -u 00:01:01.261 Tue Jul 23 06:14:13 AM UTC 2024 00:01:01.261 08:14:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:01.261 v24.09-pre-297-gf7b31b2b9 00:01:01.261 08:14:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:01.261 08:14:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:01.261 08:14:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:01.261 08:14:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:01.261 08:14:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.521 ************************************ 00:01:01.521 START TEST asan 00:01:01.521 ************************************ 00:01:01.521 08:14:13 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:01.521 using asan 00:01:01.521 00:01:01.521 real 0m0.000s 00:01:01.521 user 0m0.000s 00:01:01.521 sys 0m0.000s 00:01:01.521 08:14:13 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:01.521 08:14:13 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:01.521 ************************************ 00:01:01.521 END TEST asan 00:01:01.521 ************************************ 00:01:01.521 08:14:13 -- common/autotest_common.sh@1142 -- $ return 0 00:01:01.521 08:14:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:01.521 08:14:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:01.521 08:14:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:01.521 08:14:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:01.521 08:14:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.521 ************************************ 00:01:01.521 START TEST ubsan 00:01:01.521 ************************************ 00:01:01.521 08:14:13 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:01.521 using ubsan 00:01:01.521 00:01:01.521 real 0m0.000s 00:01:01.521 user 0m0.000s 00:01:01.521 sys 
0m0.000s 00:01:01.521 08:14:13 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:01.521 08:14:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:01.521 ************************************ 00:01:01.521 END TEST ubsan 00:01:01.521 ************************************ 00:01:01.521 08:14:13 -- common/autotest_common.sh@1142 -- $ return 0 00:01:01.521 08:14:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:01.521 08:14:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:01.521 08:14:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:01.521 08:14:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:01.521 08:14:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:01.521 08:14:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:01.521 08:14:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:01.521 08:14:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:01.521 08:14:13 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:01.521 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:01.521 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:02.090 Using 'verbs' RDMA provider 00:01:21.582 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:36.481 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:36.758 Creating mk/config.mk...done. 00:01:36.758 Creating mk/cc.flags.mk...done. 00:01:36.758 Type 'make' to build. 00:01:36.758 08:14:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:36.758 08:14:49 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:36.758 08:14:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:36.758 08:14:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.028 ************************************ 00:01:37.028 START TEST make 00:01:37.028 ************************************ 00:01:37.028 08:14:49 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:37.286 make[1]: Nothing to be done for 'all'. 
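Stripped of the pipeline plumbing, the build step above is SPDK's configure script with the flags assembled from autorun-spdk.conf, followed by a parallel make. A minimal reproduction on an already-prepared host, with the flags copied from the configure invocation in this log:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
    make -j48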
00:01:52.208 The Meson build system 00:01:52.208 Version: 1.3.1 00:01:52.208 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:52.208 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:52.208 Build type: native build 00:01:52.208 Program cat found: YES (/usr/bin/cat) 00:01:52.208 Project name: DPDK 00:01:52.208 Project version: 24.03.0 00:01:52.208 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.208 C linker for the host machine: cc ld.bfd 2.39-16 00:01:52.208 Host machine cpu family: x86_64 00:01:52.208 Host machine cpu: x86_64 00:01:52.208 Message: ## Building in Developer Mode ## 00:01:52.208 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.208 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:52.208 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.208 Program python3 found: YES (/usr/bin/python3) 00:01:52.208 Program cat found: YES (/usr/bin/cat) 00:01:52.208 Compiler for C supports arguments -march=native: YES 00:01:52.208 Checking for size of "void *" : 8 00:01:52.208 Checking for size of "void *" : 8 (cached) 00:01:52.208 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:52.208 Library m found: YES 00:01:52.208 Library numa found: YES 00:01:52.208 Has header "numaif.h" : YES 00:01:52.208 Library fdt found: NO 00:01:52.208 Library execinfo found: NO 00:01:52.208 Has header "execinfo.h" : YES 00:01:52.208 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.208 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.208 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.208 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.208 Run-time dependency openssl found: YES 3.0.9 00:01:52.208 Run-time dependency libpcap found: YES 1.10.4 00:01:52.208 Has header "pcap.h" with dependency libpcap: YES 00:01:52.208 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.208 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.208 Compiler for C supports arguments -Wformat: YES 00:01:52.208 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.208 Compiler for C supports arguments -Wformat-security: NO 00:01:52.208 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.208 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.208 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.208 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.208 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.208 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.208 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.208 Compiler for C supports arguments -Wundef: YES 00:01:52.208 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.208 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.208 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:52.208 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.208 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.208 Program objdump found: YES (/usr/bin/objdump) 00:01:52.208 Compiler for C supports arguments -mavx512f: YES 00:01:52.209 Checking if "AVX512 checking" compiles: YES 
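The "Fetching value of define" probes that follow are Meson asking the compiler which SIMD and crypto macros -march=native turns on for this host; "(undefined)" means the CPU does not advertise that extension. The same list can be dumped directly from gcc (illustrative only, not part of the job):

    # print the AES/AVX/PCLMUL/RDRND/RDSEED macros predefined for -march=native
    gcc -march=native -dM -E - </dev/null | grep -E '__(AES|AVX|PCLMUL|RDRND|RDSEED)' | sort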
00:01:52.209 Fetching value of define "__SSE4_2__" : 1 00:01:52.209 Fetching value of define "__AES__" : 1 00:01:52.209 Fetching value of define "__AVX__" : 1 00:01:52.209 Fetching value of define "__AVX2__" : (undefined) 00:01:52.209 Fetching value of define "__AVX512BW__" : (undefined) 00:01:52.209 Fetching value of define "__AVX512CD__" : (undefined) 00:01:52.209 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:52.209 Fetching value of define "__AVX512F__" : (undefined) 00:01:52.209 Fetching value of define "__AVX512VL__" : (undefined) 00:01:52.209 Fetching value of define "__PCLMUL__" : 1 00:01:52.209 Fetching value of define "__RDRND__" : 1 00:01:52.209 Fetching value of define "__RDSEED__" : (undefined) 00:01:52.209 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:52.209 Fetching value of define "__znver1__" : (undefined) 00:01:52.209 Fetching value of define "__znver2__" : (undefined) 00:01:52.209 Fetching value of define "__znver3__" : (undefined) 00:01:52.209 Fetching value of define "__znver4__" : (undefined) 00:01:52.209 Library asan found: YES 00:01:52.209 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.209 Message: lib/log: Defining dependency "log" 00:01:52.209 Message: lib/kvargs: Defining dependency "kvargs" 00:01:52.209 Message: lib/telemetry: Defining dependency "telemetry" 00:01:52.209 Library rt found: YES 00:01:52.209 Checking for function "getentropy" : NO 00:01:52.209 Message: lib/eal: Defining dependency "eal" 00:01:52.209 Message: lib/ring: Defining dependency "ring" 00:01:52.209 Message: lib/rcu: Defining dependency "rcu" 00:01:52.209 Message: lib/mempool: Defining dependency "mempool" 00:01:52.209 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.209 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.209 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:52.209 Compiler for C supports arguments -mpclmul: YES 00:01:52.209 Compiler for C supports arguments -maes: YES 00:01:52.209 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.209 Compiler for C supports arguments -mavx512bw: YES 00:01:52.209 Compiler for C supports arguments -mavx512dq: YES 00:01:52.209 Compiler for C supports arguments -mavx512vl: YES 00:01:52.209 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.209 Compiler for C supports arguments -mavx2: YES 00:01:52.209 Compiler for C supports arguments -mavx: YES 00:01:52.209 Message: lib/net: Defining dependency "net" 00:01:52.209 Message: lib/meter: Defining dependency "meter" 00:01:52.209 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.209 Message: lib/pci: Defining dependency "pci" 00:01:52.209 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.209 Message: lib/hash: Defining dependency "hash" 00:01:52.209 Message: lib/timer: Defining dependency "timer" 00:01:52.209 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.209 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.209 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.209 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:52.209 Message: lib/power: Defining dependency "power" 00:01:52.209 Message: lib/reorder: Defining dependency "reorder" 00:01:52.209 Message: lib/security: Defining dependency "security" 00:01:52.209 Has header "linux/userfaultfd.h" : YES 00:01:52.209 Has header "linux/vduse.h" : YES 00:01:52.209 Message: lib/vhost: Defining dependency "vhost" 00:01:52.209 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:52.209 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.209 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:52.209 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:52.209 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:52.209 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:52.209 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:52.209 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:52.209 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:52.209 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:52.209 Program doxygen found: YES (/usr/bin/doxygen) 00:01:52.209 Configuring doxy-api-html.conf using configuration 00:01:52.209 Configuring doxy-api-man.conf using configuration 00:01:52.209 Program mandb found: YES (/usr/bin/mandb) 00:01:52.209 Program sphinx-build found: NO 00:01:52.209 Configuring rte_build_config.h using configuration 00:01:52.209 Message: 00:01:52.209 ================= 00:01:52.209 Applications Enabled 00:01:52.209 ================= 00:01:52.209 00:01:52.209 apps: 00:01:52.209 00:01:52.209 00:01:52.209 Message: 00:01:52.209 ================= 00:01:52.209 Libraries Enabled 00:01:52.209 ================= 00:01:52.209 00:01:52.209 libs: 00:01:52.209 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:52.209 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:52.209 cryptodev, dmadev, power, reorder, security, vhost, 00:01:52.209 00:01:52.209 Message: 00:01:52.209 =============== 00:01:52.209 Drivers Enabled 00:01:52.209 =============== 00:01:52.209 00:01:52.209 common: 00:01:52.209 00:01:52.209 bus: 00:01:52.209 pci, vdev, 00:01:52.209 mempool: 00:01:52.209 ring, 00:01:52.209 dma: 00:01:52.209 00:01:52.209 net: 00:01:52.209 00:01:52.209 crypto: 00:01:52.209 00:01:52.209 compress: 00:01:52.209 00:01:52.209 vdpa: 00:01:52.209 00:01:52.209 00:01:52.209 Message: 00:01:52.209 ================= 00:01:52.209 Content Skipped 00:01:52.209 ================= 00:01:52.209 00:01:52.209 apps: 00:01:52.209 dumpcap: explicitly disabled via build config 00:01:52.209 graph: explicitly disabled via build config 00:01:52.209 pdump: explicitly disabled via build config 00:01:52.209 proc-info: explicitly disabled via build config 00:01:52.209 test-acl: explicitly disabled via build config 00:01:52.209 test-bbdev: explicitly disabled via build config 00:01:52.209 test-cmdline: explicitly disabled via build config 00:01:52.209 test-compress-perf: explicitly disabled via build config 00:01:52.209 test-crypto-perf: explicitly disabled via build config 00:01:52.209 test-dma-perf: explicitly disabled via build config 00:01:52.209 test-eventdev: explicitly disabled via build config 00:01:52.209 test-fib: explicitly disabled via build config 00:01:52.209 test-flow-perf: explicitly disabled via build config 00:01:52.209 test-gpudev: explicitly disabled via build config 00:01:52.209 test-mldev: explicitly disabled via build config 00:01:52.209 test-pipeline: explicitly disabled via build config 00:01:52.209 test-pmd: explicitly disabled via build config 00:01:52.209 test-regex: explicitly disabled via build config 00:01:52.209 test-sad: explicitly disabled via build config 00:01:52.209 test-security-perf: explicitly disabled via build config 00:01:52.209 00:01:52.209 libs: 00:01:52.209 argparse: explicitly disabled 
via build config 00:01:52.209 metrics: explicitly disabled via build config 00:01:52.209 acl: explicitly disabled via build config 00:01:52.209 bbdev: explicitly disabled via build config 00:01:52.209 bitratestats: explicitly disabled via build config 00:01:52.209 bpf: explicitly disabled via build config 00:01:52.209 cfgfile: explicitly disabled via build config 00:01:52.209 distributor: explicitly disabled via build config 00:01:52.209 efd: explicitly disabled via build config 00:01:52.209 eventdev: explicitly disabled via build config 00:01:52.209 dispatcher: explicitly disabled via build config 00:01:52.209 gpudev: explicitly disabled via build config 00:01:52.209 gro: explicitly disabled via build config 00:01:52.209 gso: explicitly disabled via build config 00:01:52.209 ip_frag: explicitly disabled via build config 00:01:52.209 jobstats: explicitly disabled via build config 00:01:52.209 latencystats: explicitly disabled via build config 00:01:52.209 lpm: explicitly disabled via build config 00:01:52.209 member: explicitly disabled via build config 00:01:52.209 pcapng: explicitly disabled via build config 00:01:52.209 rawdev: explicitly disabled via build config 00:01:52.209 regexdev: explicitly disabled via build config 00:01:52.209 mldev: explicitly disabled via build config 00:01:52.209 rib: explicitly disabled via build config 00:01:52.209 sched: explicitly disabled via build config 00:01:52.209 stack: explicitly disabled via build config 00:01:52.209 ipsec: explicitly disabled via build config 00:01:52.209 pdcp: explicitly disabled via build config 00:01:52.209 fib: explicitly disabled via build config 00:01:52.209 port: explicitly disabled via build config 00:01:52.209 pdump: explicitly disabled via build config 00:01:52.209 table: explicitly disabled via build config 00:01:52.209 pipeline: explicitly disabled via build config 00:01:52.209 graph: explicitly disabled via build config 00:01:52.209 node: explicitly disabled via build config 00:01:52.209 00:01:52.209 drivers: 00:01:52.209 common/cpt: not in enabled drivers build config 00:01:52.209 common/dpaax: not in enabled drivers build config 00:01:52.209 common/iavf: not in enabled drivers build config 00:01:52.209 common/idpf: not in enabled drivers build config 00:01:52.209 common/ionic: not in enabled drivers build config 00:01:52.209 common/mvep: not in enabled drivers build config 00:01:52.209 common/octeontx: not in enabled drivers build config 00:01:52.209 bus/auxiliary: not in enabled drivers build config 00:01:52.209 bus/cdx: not in enabled drivers build config 00:01:52.209 bus/dpaa: not in enabled drivers build config 00:01:52.209 bus/fslmc: not in enabled drivers build config 00:01:52.209 bus/ifpga: not in enabled drivers build config 00:01:52.209 bus/platform: not in enabled drivers build config 00:01:52.210 bus/uacce: not in enabled drivers build config 00:01:52.210 bus/vmbus: not in enabled drivers build config 00:01:52.210 common/cnxk: not in enabled drivers build config 00:01:52.210 common/mlx5: not in enabled drivers build config 00:01:52.210 common/nfp: not in enabled drivers build config 00:01:52.210 common/nitrox: not in enabled drivers build config 00:01:52.210 common/qat: not in enabled drivers build config 00:01:52.210 common/sfc_efx: not in enabled drivers build config 00:01:52.210 mempool/bucket: not in enabled drivers build config 00:01:52.210 mempool/cnxk: not in enabled drivers build config 00:01:52.210 mempool/dpaa: not in enabled drivers build config 00:01:52.210 mempool/dpaa2: not in enabled 
drivers build config 00:01:52.210 mempool/octeontx: not in enabled drivers build config 00:01:52.210 mempool/stack: not in enabled drivers build config 00:01:52.210 dma/cnxk: not in enabled drivers build config 00:01:52.210 dma/dpaa: not in enabled drivers build config 00:01:52.210 dma/dpaa2: not in enabled drivers build config 00:01:52.210 dma/hisilicon: not in enabled drivers build config 00:01:52.210 dma/idxd: not in enabled drivers build config 00:01:52.210 dma/ioat: not in enabled drivers build config 00:01:52.210 dma/skeleton: not in enabled drivers build config 00:01:52.210 net/af_packet: not in enabled drivers build config 00:01:52.210 net/af_xdp: not in enabled drivers build config 00:01:52.210 net/ark: not in enabled drivers build config 00:01:52.210 net/atlantic: not in enabled drivers build config 00:01:52.210 net/avp: not in enabled drivers build config 00:01:52.210 net/axgbe: not in enabled drivers build config 00:01:52.210 net/bnx2x: not in enabled drivers build config 00:01:52.210 net/bnxt: not in enabled drivers build config 00:01:52.210 net/bonding: not in enabled drivers build config 00:01:52.210 net/cnxk: not in enabled drivers build config 00:01:52.210 net/cpfl: not in enabled drivers build config 00:01:52.210 net/cxgbe: not in enabled drivers build config 00:01:52.210 net/dpaa: not in enabled drivers build config 00:01:52.210 net/dpaa2: not in enabled drivers build config 00:01:52.210 net/e1000: not in enabled drivers build config 00:01:52.210 net/ena: not in enabled drivers build config 00:01:52.210 net/enetc: not in enabled drivers build config 00:01:52.210 net/enetfec: not in enabled drivers build config 00:01:52.210 net/enic: not in enabled drivers build config 00:01:52.210 net/failsafe: not in enabled drivers build config 00:01:52.210 net/fm10k: not in enabled drivers build config 00:01:52.210 net/gve: not in enabled drivers build config 00:01:52.210 net/hinic: not in enabled drivers build config 00:01:52.210 net/hns3: not in enabled drivers build config 00:01:52.210 net/i40e: not in enabled drivers build config 00:01:52.210 net/iavf: not in enabled drivers build config 00:01:52.210 net/ice: not in enabled drivers build config 00:01:52.210 net/idpf: not in enabled drivers build config 00:01:52.210 net/igc: not in enabled drivers build config 00:01:52.210 net/ionic: not in enabled drivers build config 00:01:52.210 net/ipn3ke: not in enabled drivers build config 00:01:52.210 net/ixgbe: not in enabled drivers build config 00:01:52.210 net/mana: not in enabled drivers build config 00:01:52.210 net/memif: not in enabled drivers build config 00:01:52.210 net/mlx4: not in enabled drivers build config 00:01:52.210 net/mlx5: not in enabled drivers build config 00:01:52.210 net/mvneta: not in enabled drivers build config 00:01:52.210 net/mvpp2: not in enabled drivers build config 00:01:52.210 net/netvsc: not in enabled drivers build config 00:01:52.210 net/nfb: not in enabled drivers build config 00:01:52.210 net/nfp: not in enabled drivers build config 00:01:52.210 net/ngbe: not in enabled drivers build config 00:01:52.210 net/null: not in enabled drivers build config 00:01:52.210 net/octeontx: not in enabled drivers build config 00:01:52.210 net/octeon_ep: not in enabled drivers build config 00:01:52.210 net/pcap: not in enabled drivers build config 00:01:52.210 net/pfe: not in enabled drivers build config 00:01:52.210 net/qede: not in enabled drivers build config 00:01:52.210 net/ring: not in enabled drivers build config 00:01:52.210 net/sfc: not in enabled drivers 
build config 00:01:52.210 net/softnic: not in enabled drivers build config 00:01:52.210 net/tap: not in enabled drivers build config 00:01:52.210 net/thunderx: not in enabled drivers build config 00:01:52.210 net/txgbe: not in enabled drivers build config 00:01:52.210 net/vdev_netvsc: not in enabled drivers build config 00:01:52.210 net/vhost: not in enabled drivers build config 00:01:52.210 net/virtio: not in enabled drivers build config 00:01:52.210 net/vmxnet3: not in enabled drivers build config 00:01:52.210 raw/*: missing internal dependency, "rawdev" 00:01:52.210 crypto/armv8: not in enabled drivers build config 00:01:52.210 crypto/bcmfs: not in enabled drivers build config 00:01:52.210 crypto/caam_jr: not in enabled drivers build config 00:01:52.210 crypto/ccp: not in enabled drivers build config 00:01:52.210 crypto/cnxk: not in enabled drivers build config 00:01:52.210 crypto/dpaa_sec: not in enabled drivers build config 00:01:52.210 crypto/dpaa2_sec: not in enabled drivers build config 00:01:52.210 crypto/ipsec_mb: not in enabled drivers build config 00:01:52.210 crypto/mlx5: not in enabled drivers build config 00:01:52.210 crypto/mvsam: not in enabled drivers build config 00:01:52.210 crypto/nitrox: not in enabled drivers build config 00:01:52.210 crypto/null: not in enabled drivers build config 00:01:52.210 crypto/octeontx: not in enabled drivers build config 00:01:52.210 crypto/openssl: not in enabled drivers build config 00:01:52.210 crypto/scheduler: not in enabled drivers build config 00:01:52.210 crypto/uadk: not in enabled drivers build config 00:01:52.210 crypto/virtio: not in enabled drivers build config 00:01:52.210 compress/isal: not in enabled drivers build config 00:01:52.210 compress/mlx5: not in enabled drivers build config 00:01:52.210 compress/nitrox: not in enabled drivers build config 00:01:52.210 compress/octeontx: not in enabled drivers build config 00:01:52.210 compress/zlib: not in enabled drivers build config 00:01:52.210 regex/*: missing internal dependency, "regexdev" 00:01:52.210 ml/*: missing internal dependency, "mldev" 00:01:52.210 vdpa/ifc: not in enabled drivers build config 00:01:52.210 vdpa/mlx5: not in enabled drivers build config 00:01:52.210 vdpa/nfp: not in enabled drivers build config 00:01:52.210 vdpa/sfc: not in enabled drivers build config 00:01:52.210 event/*: missing internal dependency, "eventdev" 00:01:52.210 baseband/*: missing internal dependency, "bbdev" 00:01:52.210 gpu/*: missing internal dependency, "gpudev" 00:01:52.210 00:01:52.210 00:01:52.210 Build targets in project: 85 00:01:52.210 00:01:52.210 DPDK 24.03.0 00:01:52.210 00:01:52.210 User defined options 00:01:52.210 buildtype : debug 00:01:52.210 default_library : shared 00:01:52.210 libdir : lib 00:01:52.210 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:52.210 b_sanitize : address 00:01:52.210 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:52.210 c_link_args : 00:01:52.210 cpu_instruction_set: native 00:01:52.210 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:01:52.210 disable_libs : 
acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:01:52.210 enable_docs : false 00:01:52.210 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:52.210 enable_kmods : false 00:01:52.210 max_lcores : 128 00:01:52.210 tests : false 00:01:52.210 00:01:52.210 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.210 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.210 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.210 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.210 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.210 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.210 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.210 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.210 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.210 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.210 [9/268] Linking static target lib/librte_kvargs.a 00:01:52.210 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.210 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.210 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.210 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.210 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.210 [15/268] Linking static target lib/librte_log.a 00:01:52.210 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.210 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.210 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.210 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.210 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.210 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.210 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.471 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.471 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.471 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.471 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.471 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.471 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.471 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.471 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.471 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.471 [32/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.471 [33/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.471 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.471 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.472 [36/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.472 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.472 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.472 [39/268] Linking static target lib/librte_telemetry.a 00:01:52.472 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.472 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.472 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.472 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.472 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.472 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.472 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.472 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.736 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.736 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.736 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.736 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.736 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.736 [53/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.736 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.736 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.736 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.736 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.736 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.736 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.736 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.736 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.736 [62/268] Linking target lib/librte_log.so.24.1 00:01:52.736 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.999 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.999 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.999 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.264 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.264 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.264 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.264 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.264 [71/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:53.264 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.264 [73/268] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:53.264 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.264 [75/268] Linking static target lib/librte_pci.a 00:01:53.264 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.527 [77/268] Linking target lib/librte_kvargs.so.24.1 00:01:53.527 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.527 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.527 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.527 [81/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.527 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.527 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.527 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.527 [85/268] Linking static target lib/librte_ring.a 00:01:53.527 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.527 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.527 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.527 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.527 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.527 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.527 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.527 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.527 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.527 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.788 [96/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:53.788 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.788 [98/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:53.788 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.788 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.788 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.788 [102/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.788 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.788 [104/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.054 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.054 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.054 [107/268] Linking static target lib/librte_meter.a 00:01:54.054 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:54.054 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:54.054 [110/268] Linking target lib/librte_telemetry.so.24.1 00:01:54.054 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.054 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:54.054 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.054 [114/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.054 [115/268] Linking static target lib/librte_rcu.a 00:01:54.054 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.054 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:54.054 [118/268] Linking static target lib/librte_mempool.a 00:01:54.054 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:54.054 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:54.054 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:54.054 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:54.054 [123/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:54.328 [124/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.328 [125/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:54.328 [126/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:54.328 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:54.590 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:54.590 [129/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:54.590 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:54.590 [131/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:54.590 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.590 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:54.590 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:54.590 [135/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.853 [136/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:54.853 [137/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:54.853 [138/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.853 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:54.853 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:54.853 [141/268] Linking static target lib/librte_cmdline.a 00:01:54.853 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:54.853 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.853 [144/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:54.853 [145/268] Linking static target lib/librte_timer.a 00:01:54.853 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:54.853 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.853 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.853 [149/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:54.853 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:54.853 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.115 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.115 [153/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:01:55.115 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.115 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.115 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.115 [157/268] Linking static target lib/librte_eal.a 00:01:55.375 [158/268] Linking static target lib/librte_dmadev.a 00:01:55.375 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.375 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.375 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.375 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.375 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.635 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.635 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.635 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.635 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.635 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.635 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.635 [170/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.635 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.635 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.894 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.894 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.894 [175/268] Linking static target lib/librte_hash.a 00:01:55.894 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.894 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.894 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.894 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.894 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.894 [181/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.894 [182/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.894 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.894 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.894 [185/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.894 [186/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.894 [187/268] Linking static target drivers/librte_bus_vdev.a 00:01:56.153 [188/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.153 [189/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.153 [190/268] Linking static target lib/librte_net.a 00:01:56.153 [191/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.153 [192/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.153 [193/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.153 [194/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.153 [195/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.153 [196/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.153 [197/268] Linking static target drivers/librte_bus_pci.a 00:01:56.153 [198/268] Linking static target lib/librte_compressdev.a 00:01:56.153 [199/268] Linking static target lib/librte_power.a 00:01:56.412 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.412 [201/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.412 [202/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.412 [203/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.412 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:56.412 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.412 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.670 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.670 [208/268] Linking static target drivers/librte_mempool_ring.a 00:01:56.670 [209/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.670 [210/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.670 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.670 [212/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.670 [213/268] Linking static target lib/librte_reorder.a 00:01:56.670 [214/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.238 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.804 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.804 [217/268] Linking static target lib/librte_security.a 00:01:58.739 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.739 [219/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.739 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:58.739 [221/268] Linking static target lib/librte_mbuf.a 00:01:59.674 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.932 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.932 [224/268] Linking static target lib/librte_cryptodev.a 00:02:00.500 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.762 [226/268] Linking static target lib/librte_ethdev.a 00:02:01.341 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.634 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.634 [229/268] Linking target lib/librte_eal.so.24.1 00:02:04.634 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:04.634 [231/268] Linking target lib/librte_ring.so.24.1 
00:02:04.634 [232/268] Linking target lib/librte_timer.so.24.1 00:02:04.634 [233/268] Linking target lib/librte_meter.so.24.1 00:02:04.634 [234/268] Linking target lib/librte_pci.so.24.1 00:02:04.634 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:04.634 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:04.895 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:04.895 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:04.895 [239/268] Linking target lib/librte_rcu.so.24.1 00:02:04.895 [240/268] Linking target lib/librte_mempool.so.24.1 00:02:04.895 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:04.895 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:04.895 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:04.895 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:05.154 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.154 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.154 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.154 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:05.414 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.674 [250/268] Linking target lib/librte_net.so.24.1 00:02:05.674 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:05.674 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:05.674 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:05.674 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.933 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.933 [256/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:05.933 [257/268] Linking target lib/librte_hash.so.24.1 00:02:05.933 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:05.933 [259/268] Linking target lib/librte_security.so.24.1 00:02:06.193 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:08.735 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.735 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:08.735 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:08.996 [264/268] Linking target lib/librte_power.so.24.1 00:03:05.267 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:05.267 [266/268] Linking static target lib/librte_vhost.a 00:03:05.267 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.267 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:05.267 INFO: autodetecting backend as ninja 00:03:05.267 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:05.838 CC lib/ut_mock/mock.o 00:03:05.838 CC lib/log/log.o 00:03:05.838 CC lib/log/log_deprecated.o 00:03:05.838 CC lib/log/log_flags.o 00:03:05.838 CC lib/ut/ut.o 00:03:06.415 LIB libspdk_log.a 00:03:06.415 LIB libspdk_ut_mock.a 00:03:06.415 LIB libspdk_ut.a 00:03:06.415 SO libspdk_log.so.7.0 00:03:06.415 SO libspdk_ut.so.2.0 00:03:06.415 SO libspdk_ut_mock.so.6.0 
00:03:06.415 SYMLINK libspdk_ut.so 00:03:06.415 SYMLINK libspdk_log.so 00:03:06.415 SYMLINK libspdk_ut_mock.so 00:03:06.675 CC lib/util/base64.o 00:03:06.675 CC lib/util/bit_array.o 00:03:06.675 CC lib/util/cpuset.o 00:03:06.675 CC lib/util/crc32.o 00:03:06.675 CC lib/util/crc16.o 00:03:06.675 CC lib/util/crc32_ieee.o 00:03:06.675 CC lib/util/crc32c.o 00:03:06.675 CC lib/util/dif.o 00:03:06.675 CC lib/util/crc64.o 00:03:06.675 CC lib/util/file.o 00:03:06.675 CC lib/util/fd.o 00:03:06.675 CC lib/util/hexlify.o 00:03:06.675 CC lib/ioat/ioat.o 00:03:06.675 CC lib/util/fd_group.o 00:03:06.675 CC lib/util/iov.o 00:03:06.675 CC lib/util/math.o 00:03:06.675 CC lib/util/net.o 00:03:06.675 CC lib/util/pipe.o 00:03:06.675 CC lib/util/strerror_tls.o 00:03:06.675 CC lib/util/string.o 00:03:06.675 CC lib/util/zipf.o 00:03:06.675 CC lib/util/xor.o 00:03:06.675 CC lib/util/uuid.o 00:03:06.675 CC lib/dma/dma.o 00:03:06.675 CXX lib/trace_parser/trace.o 00:03:06.675 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.675 CC lib/vfio_user/host/vfio_user.o 00:03:06.936 LIB libspdk_dma.a 00:03:06.936 SO libspdk_dma.so.4.0 00:03:06.936 SYMLINK libspdk_dma.so 00:03:07.214 LIB libspdk_ioat.a 00:03:07.214 SO libspdk_ioat.so.7.0 00:03:07.214 SYMLINK libspdk_ioat.so 00:03:07.521 LIB libspdk_vfio_user.a 00:03:07.521 SO libspdk_vfio_user.so.5.0 00:03:07.521 LIB libspdk_util.a 00:03:07.521 SYMLINK libspdk_vfio_user.so 00:03:07.521 SO libspdk_util.so.10.0 00:03:07.781 SYMLINK libspdk_util.so 00:03:08.041 CC lib/conf/conf.o 00:03:08.041 CC lib/json/json_parse.o 00:03:08.041 CC lib/rdma_provider/common.o 00:03:08.041 CC lib/json/json_util.o 00:03:08.041 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.041 CC lib/json/json_write.o 00:03:08.041 CC lib/vmd/vmd.o 00:03:08.041 CC lib/vmd/led.o 00:03:08.041 CC lib/env_dpdk/env.o 00:03:08.041 CC lib/env_dpdk/memory.o 00:03:08.041 CC lib/env_dpdk/pci.o 00:03:08.041 CC lib/env_dpdk/init.o 00:03:08.041 CC lib/env_dpdk/pci_ioat.o 00:03:08.041 CC lib/env_dpdk/threads.o 00:03:08.041 CC lib/env_dpdk/pci_virtio.o 00:03:08.041 CC lib/env_dpdk/pci_idxd.o 00:03:08.041 CC lib/env_dpdk/pci_vmd.o 00:03:08.041 CC lib/env_dpdk/sigbus_handler.o 00:03:08.041 CC lib/env_dpdk/pci_event.o 00:03:08.041 CC lib/env_dpdk/pci_dpdk.o 00:03:08.041 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.041 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.041 CC lib/rdma_utils/rdma_utils.o 00:03:08.041 CC lib/idxd/idxd.o 00:03:08.041 CC lib/idxd/idxd_user.o 00:03:08.041 CC lib/idxd/idxd_kernel.o 00:03:08.302 LIB libspdk_rdma_provider.a 00:03:08.302 SO libspdk_rdma_provider.so.6.0 00:03:08.302 LIB libspdk_conf.a 00:03:08.562 SO libspdk_conf.so.6.0 00:03:08.562 SYMLINK libspdk_rdma_provider.so 00:03:08.562 LIB libspdk_trace_parser.a 00:03:08.562 SYMLINK libspdk_conf.so 00:03:08.562 SO libspdk_trace_parser.so.5.0 00:03:08.562 LIB libspdk_rdma_utils.a 00:03:08.562 LIB libspdk_json.a 00:03:08.562 SO libspdk_rdma_utils.so.1.0 00:03:08.562 SYMLINK libspdk_trace_parser.so 00:03:08.562 SO libspdk_json.so.6.0 00:03:08.822 SYMLINK libspdk_json.so 00:03:08.822 SYMLINK libspdk_rdma_utils.so 00:03:08.822 LIB libspdk_vmd.a 00:03:09.082 SO libspdk_vmd.so.6.0 00:03:09.082 CC lib/jsonrpc/jsonrpc_server.o 00:03:09.082 CC lib/jsonrpc/jsonrpc_client.o 00:03:09.082 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:09.082 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:09.082 SYMLINK libspdk_vmd.so 00:03:09.663 LIB libspdk_idxd.a 00:03:09.663 SO libspdk_idxd.so.12.0 00:03:09.663 LIB libspdk_jsonrpc.a 00:03:09.663 SYMLINK libspdk_idxd.so 00:03:09.663 SO 
libspdk_jsonrpc.so.6.0 00:03:09.924 SYMLINK libspdk_jsonrpc.so 00:03:10.185 CC lib/rpc/rpc.o 00:03:10.444 LIB libspdk_rpc.a 00:03:10.444 SO libspdk_rpc.so.6.0 00:03:10.704 SYMLINK libspdk_rpc.so 00:03:10.971 CC lib/notify/notify.o 00:03:10.971 CC lib/notify/notify_rpc.o 00:03:10.971 CC lib/keyring/keyring_rpc.o 00:03:10.971 CC lib/keyring/keyring.o 00:03:10.971 CC lib/trace/trace.o 00:03:10.971 CC lib/trace/trace_flags.o 00:03:10.971 CC lib/trace/trace_rpc.o 00:03:11.231 LIB libspdk_notify.a 00:03:11.231 SO libspdk_notify.so.6.0 00:03:11.231 SYMLINK libspdk_notify.so 00:03:11.231 LIB libspdk_trace.a 00:03:11.231 SO libspdk_trace.so.10.0 00:03:11.491 LIB libspdk_keyring.a 00:03:11.491 SYMLINK libspdk_trace.so 00:03:11.491 SO libspdk_keyring.so.1.0 00:03:11.491 SYMLINK libspdk_keyring.so 00:03:11.751 CC lib/sock/sock_rpc.o 00:03:11.751 CC lib/sock/sock.o 00:03:11.751 CC lib/thread/thread.o 00:03:11.751 CC lib/thread/iobuf.o 00:03:12.690 LIB libspdk_env_dpdk.a 00:03:12.690 LIB libspdk_sock.a 00:03:12.690 SO libspdk_env_dpdk.so.15.0 00:03:12.690 SO libspdk_sock.so.10.0 00:03:12.690 SYMLINK libspdk_sock.so 00:03:12.951 SYMLINK libspdk_env_dpdk.so 00:03:13.210 CC lib/nvme/nvme_ctrlr.o 00:03:13.210 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.210 CC lib/nvme/nvme_fabric.o 00:03:13.210 CC lib/nvme/nvme_ns_cmd.o 00:03:13.210 CC lib/nvme/nvme_ns.o 00:03:13.210 CC lib/nvme/nvme_pcie_common.o 00:03:13.210 CC lib/nvme/nvme_pcie.o 00:03:13.210 CC lib/nvme/nvme_qpair.o 00:03:13.210 CC lib/nvme/nvme.o 00:03:13.210 CC lib/nvme/nvme_quirks.o 00:03:13.210 CC lib/nvme/nvme_transport.o 00:03:13.210 CC lib/nvme/nvme_discovery.o 00:03:13.210 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:13.210 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:13.210 CC lib/nvme/nvme_tcp.o 00:03:13.210 CC lib/nvme/nvme_opal.o 00:03:13.210 CC lib/nvme/nvme_io_msg.o 00:03:13.210 CC lib/nvme/nvme_poll_group.o 00:03:13.210 CC lib/nvme/nvme_zns.o 00:03:13.210 CC lib/nvme/nvme_stubs.o 00:03:13.210 CC lib/nvme/nvme_auth.o 00:03:13.210 CC lib/nvme/nvme_cuse.o 00:03:13.210 CC lib/nvme/nvme_rdma.o 00:03:14.591 LIB libspdk_thread.a 00:03:14.591 SO libspdk_thread.so.10.1 00:03:14.591 SYMLINK libspdk_thread.so 00:03:14.850 CC lib/virtio/virtio.o 00:03:14.850 CC lib/accel/accel.o 00:03:14.850 CC lib/virtio/virtio_vhost_user.o 00:03:14.850 CC lib/accel/accel_sw.o 00:03:14.850 CC lib/accel/accel_rpc.o 00:03:14.850 CC lib/virtio/virtio_vfio_user.o 00:03:14.850 CC lib/virtio/virtio_pci.o 00:03:14.850 CC lib/init/json_config.o 00:03:14.850 CC lib/init/subsystem.o 00:03:14.850 CC lib/init/subsystem_rpc.o 00:03:14.850 CC lib/init/rpc.o 00:03:14.850 CC lib/blob/blobstore.o 00:03:14.850 CC lib/blob/request.o 00:03:14.850 CC lib/blob/blob_bs_dev.o 00:03:14.850 CC lib/blob/zeroes.o 00:03:15.108 LIB libspdk_init.a 00:03:15.108 SO libspdk_init.so.5.0 00:03:15.368 SYMLINK libspdk_init.so 00:03:15.368 LIB libspdk_virtio.a 00:03:15.368 SO libspdk_virtio.so.7.0 00:03:15.628 CC lib/event/app.o 00:03:15.628 CC lib/event/reactor.o 00:03:15.628 CC lib/event/log_rpc.o 00:03:15.628 CC lib/event/scheduler_static.o 00:03:15.628 CC lib/event/app_rpc.o 00:03:15.628 SYMLINK libspdk_virtio.so 00:03:17.010 LIB libspdk_event.a 00:03:17.010 SO libspdk_event.so.14.0 00:03:17.270 SYMLINK libspdk_event.so 00:03:17.529 LIB libspdk_accel.a 00:03:17.529 SO libspdk_accel.so.16.0 00:03:17.791 SYMLINK libspdk_accel.so 00:03:18.050 CC lib/bdev/bdev.o 00:03:18.050 CC lib/bdev/bdev_zone.o 00:03:18.050 CC lib/bdev/bdev_rpc.o 00:03:18.050 CC lib/bdev/part.o 00:03:18.050 CC lib/bdev/scsi_nvme.o 00:03:18.618 LIB 
libspdk_nvme.a 00:03:18.877 SO libspdk_nvme.so.13.1 00:03:19.815 SYMLINK libspdk_nvme.so 00:03:25.127 LIB libspdk_blob.a 00:03:25.127 SO libspdk_blob.so.11.0 00:03:25.127 SYMLINK libspdk_blob.so 00:03:25.387 CC lib/blobfs/blobfs.o 00:03:25.387 CC lib/blobfs/tree.o 00:03:25.387 CC lib/lvol/lvol.o 00:03:25.956 LIB libspdk_bdev.a 00:03:25.956 SO libspdk_bdev.so.16.0 00:03:26.216 SYMLINK libspdk_bdev.so 00:03:26.482 CC lib/nvmf/ctrlr.o 00:03:26.482 CC lib/nvmf/ctrlr_bdev.o 00:03:26.482 CC lib/nvmf/ctrlr_discovery.o 00:03:26.482 CC lib/nvmf/nvmf.o 00:03:26.482 CC lib/nvmf/subsystem.o 00:03:26.482 CC lib/nvmf/nvmf_rpc.o 00:03:26.482 CC lib/nbd/nbd.o 00:03:26.482 CC lib/nvmf/transport.o 00:03:26.482 CC lib/nbd/nbd_rpc.o 00:03:26.482 CC lib/ftl/ftl_core.o 00:03:26.482 CC lib/nvmf/stubs.o 00:03:26.482 CC lib/nvmf/tcp.o 00:03:26.482 CC lib/nvmf/mdns_server.o 00:03:26.482 CC lib/scsi/dev.o 00:03:26.482 CC lib/nvmf/rdma.o 00:03:26.482 CC lib/scsi/lun.o 00:03:26.482 CC lib/nvmf/auth.o 00:03:26.482 CC lib/ftl/ftl_init.o 00:03:26.482 CC lib/ftl/ftl_layout.o 00:03:26.482 CC lib/ftl/ftl_debug.o 00:03:26.482 CC lib/scsi/port.o 00:03:26.482 CC lib/scsi/scsi.o 00:03:26.482 CC lib/ftl/ftl_io.o 00:03:26.482 CC lib/scsi/scsi_bdev.o 00:03:26.482 CC lib/ftl/ftl_sb.o 00:03:26.482 CC lib/scsi/scsi_pr.o 00:03:26.482 CC lib/scsi/scsi_rpc.o 00:03:26.482 CC lib/ftl/ftl_l2p.o 00:03:26.482 CC lib/ftl/ftl_l2p_flat.o 00:03:26.482 CC lib/scsi/task.o 00:03:26.482 CC lib/ftl/ftl_nv_cache.o 00:03:26.482 CC lib/ftl/ftl_band_ops.o 00:03:26.482 CC lib/ftl/ftl_band.o 00:03:26.482 CC lib/ftl/ftl_writer.o 00:03:26.482 CC lib/ftl/ftl_reloc.o 00:03:26.482 CC lib/ftl/ftl_rq.o 00:03:26.482 CC lib/ftl/ftl_l2p_cache.o 00:03:26.482 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.482 CC lib/ftl/ftl_p2l.o 00:03:26.482 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.482 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.482 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.482 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:26.482 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.482 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.482 CC lib/ublk/ublk.o 00:03:26.749 CC lib/ublk/ublk_rpc.o 00:03:26.749 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:26.749 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.749 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:26.749 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.749 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.749 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:27.020 CC lib/ftl/utils/ftl_md.o 00:03:27.020 CC lib/ftl/utils/ftl_mempool.o 00:03:27.020 CC lib/ftl/utils/ftl_conf.o 00:03:27.020 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.020 CC lib/ftl/utils/ftl_property.o 00:03:27.020 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:27.020 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:27.020 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:27.020 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:27.020 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:27.020 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:27.020 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:27.020 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:27.020 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:27.020 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:27.020 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:27.281 CC lib/ftl/base/ftl_base_dev.o 00:03:27.281 CC lib/ftl/base/ftl_base_bdev.o 00:03:27.281 CC lib/ftl/ftl_trace.o 00:03:27.281 LIB libspdk_blobfs.a 00:03:27.281 SO libspdk_blobfs.so.10.0 00:03:27.540 SYMLINK libspdk_blobfs.so 00:03:27.540 LIB libspdk_nbd.a 00:03:27.540 SO libspdk_nbd.so.7.0 00:03:27.799 SYMLINK libspdk_nbd.so 00:03:27.799 LIB libspdk_scsi.a 00:03:27.799 SO libspdk_scsi.so.9.0 
00:03:27.799 SYMLINK libspdk_scsi.so 00:03:27.799 LIB libspdk_lvol.a 00:03:27.799 SO libspdk_lvol.so.10.0 00:03:28.058 LIB libspdk_ublk.a 00:03:28.058 SO libspdk_ublk.so.3.0 00:03:28.058 SYMLINK libspdk_lvol.so 00:03:28.058 CC lib/vhost/vhost.o 00:03:28.058 CC lib/iscsi/conn.o 00:03:28.058 CC lib/iscsi/init_grp.o 00:03:28.058 CC lib/vhost/vhost_rpc.o 00:03:28.058 CC lib/iscsi/iscsi.o 00:03:28.058 CC lib/vhost/vhost_scsi.o 00:03:28.058 CC lib/iscsi/param.o 00:03:28.058 CC lib/iscsi/md5.o 00:03:28.058 CC lib/vhost/rte_vhost_user.o 00:03:28.058 CC lib/vhost/vhost_blk.o 00:03:28.058 CC lib/iscsi/portal_grp.o 00:03:28.058 CC lib/iscsi/tgt_node.o 00:03:28.058 CC lib/iscsi/iscsi_subsystem.o 00:03:28.058 CC lib/iscsi/iscsi_rpc.o 00:03:28.058 CC lib/iscsi/task.o 00:03:28.058 SYMLINK libspdk_ublk.so 00:03:28.994 LIB libspdk_ftl.a 00:03:28.994 SO libspdk_ftl.so.9.0 00:03:29.574 LIB libspdk_vhost.a 00:03:29.574 SYMLINK libspdk_ftl.so 00:03:29.574 SO libspdk_vhost.so.8.0 00:03:29.833 SYMLINK libspdk_vhost.so 00:03:30.093 LIB libspdk_iscsi.a 00:03:30.093 SO libspdk_iscsi.so.8.0 00:03:30.353 SYMLINK libspdk_iscsi.so 00:03:30.613 LIB libspdk_nvmf.a 00:03:30.613 SO libspdk_nvmf.so.19.0 00:03:31.183 SYMLINK libspdk_nvmf.so 00:03:31.751 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.751 CC module/accel/error/accel_error.o 00:03:31.751 CC module/accel/error/accel_error_rpc.o 00:03:31.751 CC module/blob/bdev/blob_bdev.o 00:03:31.751 CC module/accel/dsa/accel_dsa.o 00:03:31.751 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.751 CC module/accel/ioat/accel_ioat.o 00:03:31.751 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.751 CC module/accel/iaa/accel_iaa.o 00:03:31.751 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.751 CC module/keyring/linux/keyring.o 00:03:31.751 CC module/keyring/linux/keyring_rpc.o 00:03:31.751 CC module/sock/posix/posix.o 00:03:31.751 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.751 CC module/keyring/file/keyring.o 00:03:31.751 CC module/keyring/file/keyring_rpc.o 00:03:31.751 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.751 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.751 LIB libspdk_env_dpdk_rpc.a 00:03:31.751 SO libspdk_env_dpdk_rpc.so.6.0 00:03:32.010 SYMLINK libspdk_env_dpdk_rpc.so 00:03:32.010 LIB libspdk_keyring_file.a 00:03:32.010 LIB libspdk_accel_error.a 00:03:32.010 LIB libspdk_scheduler_gscheduler.a 00:03:32.010 SO libspdk_keyring_file.so.1.0 00:03:32.010 SO libspdk_accel_error.so.2.0 00:03:32.010 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.010 LIB libspdk_keyring_linux.a 00:03:32.010 SO libspdk_scheduler_gscheduler.so.4.0 00:03:32.010 LIB libspdk_accel_iaa.a 00:03:32.010 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:32.010 SO libspdk_keyring_linux.so.1.0 00:03:32.010 SO libspdk_accel_iaa.so.3.0 00:03:32.010 SYMLINK libspdk_keyring_file.so 00:03:32.010 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.010 LIB libspdk_accel_ioat.a 00:03:32.010 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.010 SYMLINK libspdk_accel_error.so 00:03:32.010 SO libspdk_accel_ioat.so.6.0 00:03:32.010 SYMLINK libspdk_keyring_linux.so 00:03:32.010 LIB libspdk_scheduler_dynamic.a 00:03:32.269 SYMLINK libspdk_accel_iaa.so 00:03:32.269 SO libspdk_scheduler_dynamic.so.4.0 00:03:32.269 SYMLINK libspdk_accel_ioat.so 00:03:32.269 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.269 LIB libspdk_blob_bdev.a 00:03:32.269 LIB libspdk_accel_dsa.a 00:03:32.269 SO libspdk_blob_bdev.so.11.0 00:03:32.269 SO libspdk_accel_dsa.so.5.0 00:03:32.528 SYMLINK libspdk_blob_bdev.so 00:03:32.528 
SYMLINK libspdk_accel_dsa.so 00:03:32.789 CC module/blobfs/bdev/blobfs_bdev.o 00:03:32.789 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:32.789 CC module/bdev/gpt/gpt.o 00:03:32.789 CC module/bdev/malloc/bdev_malloc.o 00:03:32.789 CC module/bdev/gpt/vbdev_gpt.o 00:03:32.789 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.789 CC module/bdev/delay/vbdev_delay.o 00:03:32.789 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:32.789 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.789 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.789 CC module/bdev/raid/bdev_raid.o 00:03:32.789 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.789 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.789 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.789 CC module/bdev/raid/raid0.o 00:03:32.789 CC module/bdev/raid/raid1.o 00:03:32.789 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.789 CC module/bdev/raid/concat.o 00:03:32.789 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.789 CC module/bdev/null/bdev_null.o 00:03:32.789 CC module/bdev/null/bdev_null_rpc.o 00:03:32.789 CC module/bdev/aio/bdev_aio.o 00:03:32.789 CC module/bdev/error/vbdev_error.o 00:03:32.789 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.790 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.790 CC module/bdev/lvol/vbdev_lvol.o 00:03:32.790 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.790 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.790 CC module/bdev/error/vbdev_error_rpc.o 00:03:32.790 CC module/bdev/ftl/bdev_ftl.o 00:03:32.790 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.790 CC module/bdev/split/vbdev_split.o 00:03:32.790 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.790 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.790 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.790 CC module/bdev/nvme/bdev_nvme.o 00:03:32.790 CC module/bdev/nvme/nvme_rpc.o 00:03:32.790 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.790 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.790 CC module/bdev/nvme/vbdev_opal.o 00:03:32.790 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.790 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.358 LIB libspdk_blobfs_bdev.a 00:03:33.358 LIB libspdk_bdev_split.a 00:03:33.358 SO libspdk_blobfs_bdev.so.6.0 00:03:33.358 SO libspdk_bdev_split.so.6.0 00:03:33.358 LIB libspdk_bdev_gpt.a 00:03:33.358 LIB libspdk_bdev_ftl.a 00:03:33.358 SO libspdk_bdev_ftl.so.6.0 00:03:33.358 SO libspdk_bdev_gpt.so.6.0 00:03:33.358 LIB libspdk_bdev_malloc.a 00:03:33.358 SYMLINK libspdk_blobfs_bdev.so 00:03:33.358 LIB libspdk_bdev_error.a 00:03:33.358 SYMLINK libspdk_bdev_split.so 00:03:33.358 LIB libspdk_sock_posix.a 00:03:33.358 SO libspdk_bdev_malloc.so.6.0 00:03:33.358 SO libspdk_bdev_error.so.6.0 00:03:33.358 SO libspdk_sock_posix.so.6.0 00:03:33.358 SYMLINK libspdk_bdev_gpt.so 00:03:33.358 LIB libspdk_bdev_passthru.a 00:03:33.358 SYMLINK libspdk_bdev_ftl.so 00:03:33.358 SYMLINK libspdk_bdev_malloc.so 00:03:33.358 SYMLINK libspdk_bdev_error.so 00:03:33.358 SO libspdk_bdev_passthru.so.6.0 00:03:33.358 LIB libspdk_bdev_null.a 00:03:33.358 SYMLINK libspdk_sock_posix.so 00:03:33.358 SO libspdk_bdev_null.so.6.0 00:03:33.618 LIB libspdk_bdev_aio.a 00:03:33.618 LIB libspdk_bdev_zone_block.a 00:03:33.618 SO libspdk_bdev_aio.so.6.0 00:03:33.618 SO libspdk_bdev_zone_block.so.6.0 00:03:33.618 SYMLINK libspdk_bdev_null.so 00:03:33.618 LIB libspdk_bdev_delay.a 00:03:33.618 SYMLINK libspdk_bdev_passthru.so 00:03:33.618 SO libspdk_bdev_delay.so.6.0 00:03:33.618 SYMLINK libspdk_bdev_aio.so 00:03:33.618 LIB libspdk_bdev_iscsi.a 00:03:33.618 SO libspdk_bdev_iscsi.so.6.0 
00:03:33.618 LIB libspdk_bdev_lvol.a 00:03:33.618 SYMLINK libspdk_bdev_zone_block.so 00:03:33.618 SYMLINK libspdk_bdev_delay.so 00:03:33.618 SO libspdk_bdev_lvol.so.6.0 00:03:33.618 SYMLINK libspdk_bdev_iscsi.so 00:03:33.618 SYMLINK libspdk_bdev_lvol.so 00:03:34.187 LIB libspdk_bdev_virtio.a 00:03:34.187 SO libspdk_bdev_virtio.so.6.0 00:03:34.187 SYMLINK libspdk_bdev_virtio.so 00:03:36.098 LIB libspdk_bdev_nvme.a 00:03:36.098 LIB libspdk_bdev_raid.a 00:03:36.098 SO libspdk_bdev_nvme.so.7.0 00:03:36.098 SO libspdk_bdev_raid.so.6.0 00:03:36.098 SYMLINK libspdk_bdev_nvme.so 00:03:36.098 SYMLINK libspdk_bdev_raid.so 00:03:36.667 CC module/event/subsystems/sock/sock.o 00:03:36.667 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.667 CC module/event/subsystems/keyring/keyring.o 00:03:36.667 CC module/event/subsystems/vmd/vmd.o 00:03:36.667 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:36.667 CC module/event/subsystems/iobuf/iobuf.o 00:03:36.667 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.667 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.927 LIB libspdk_event_keyring.a 00:03:36.927 LIB libspdk_event_sock.a 00:03:36.927 SO libspdk_event_sock.so.5.0 00:03:36.927 SO libspdk_event_keyring.so.1.0 00:03:36.927 SYMLINK libspdk_event_keyring.so 00:03:36.927 SYMLINK libspdk_event_sock.so 00:03:36.927 LIB libspdk_event_vhost_blk.a 00:03:36.927 LIB libspdk_event_vmd.a 00:03:36.927 LIB libspdk_event_scheduler.a 00:03:36.927 SO libspdk_event_vhost_blk.so.3.0 00:03:36.927 LIB libspdk_event_iobuf.a 00:03:36.927 SO libspdk_event_scheduler.so.4.0 00:03:36.927 SO libspdk_event_vmd.so.6.0 00:03:37.187 SO libspdk_event_iobuf.so.3.0 00:03:37.187 SYMLINK libspdk_event_vhost_blk.so 00:03:37.187 SYMLINK libspdk_event_scheduler.so 00:03:37.187 SYMLINK libspdk_event_vmd.so 00:03:37.187 SYMLINK libspdk_event_iobuf.so 00:03:37.448 CC module/event/subsystems/accel/accel.o 00:03:37.709 LIB libspdk_event_accel.a 00:03:37.982 SO libspdk_event_accel.so.6.0 00:03:37.982 SYMLINK libspdk_event_accel.so 00:03:38.255 CC module/event/subsystems/bdev/bdev.o 00:03:38.823 LIB libspdk_event_bdev.a 00:03:38.823 SO libspdk_event_bdev.so.6.0 00:03:38.823 SYMLINK libspdk_event_bdev.so 00:03:39.084 CC module/event/subsystems/ublk/ublk.o 00:03:39.084 CC module/event/subsystems/nbd/nbd.o 00:03:39.084 CC module/event/subsystems/scsi/scsi.o 00:03:39.084 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:39.084 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:39.345 LIB libspdk_event_scsi.a 00:03:39.345 SO libspdk_event_scsi.so.6.0 00:03:39.345 LIB libspdk_event_nbd.a 00:03:39.605 LIB libspdk_event_ublk.a 00:03:39.605 SO libspdk_event_nbd.so.6.0 00:03:39.605 SYMLINK libspdk_event_scsi.so 00:03:39.605 SO libspdk_event_ublk.so.3.0 00:03:39.605 SYMLINK libspdk_event_nbd.so 00:03:39.605 SYMLINK libspdk_event_ublk.so 00:03:39.605 LIB libspdk_event_nvmf.a 00:03:39.605 SO libspdk_event_nvmf.so.6.0 00:03:39.864 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.864 SYMLINK libspdk_event_nvmf.so 00:03:39.864 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:40.124 LIB libspdk_event_vhost_scsi.a 00:03:40.124 LIB libspdk_event_iscsi.a 00:03:40.124 SO libspdk_event_vhost_scsi.so.3.0 00:03:40.124 SO libspdk_event_iscsi.so.6.0 00:03:40.124 SYMLINK libspdk_event_vhost_scsi.so 00:03:40.124 SYMLINK libspdk_event_iscsi.so 00:03:40.384 SO libspdk.so.6.0 00:03:40.384 SYMLINK libspdk.so 00:03:40.654 CC app/spdk_lspci/spdk_lspci.o 00:03:40.654 CC app/spdk_nvme_identify/identify.o 00:03:40.654 CC app/spdk_top/spdk_top.o 00:03:40.654 CC 
app/spdk_nvme_perf/perf.o 00:03:40.654 CC app/trace_record/trace_record.o 00:03:40.654 CXX app/trace/trace.o 00:03:40.654 TEST_HEADER include/spdk/accel.h 00:03:40.654 TEST_HEADER include/spdk/accel_module.h 00:03:40.654 TEST_HEADER include/spdk/assert.h 00:03:40.654 TEST_HEADER include/spdk/barrier.h 00:03:40.654 TEST_HEADER include/spdk/base64.h 00:03:40.654 CC test/rpc_client/rpc_client_test.o 00:03:40.654 TEST_HEADER include/spdk/bdev.h 00:03:40.654 TEST_HEADER include/spdk/bdev_module.h 00:03:40.654 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.654 TEST_HEADER include/spdk/bit_array.h 00:03:40.654 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.654 TEST_HEADER include/spdk/bit_pool.h 00:03:40.654 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.654 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.654 TEST_HEADER include/spdk/blobfs.h 00:03:40.654 TEST_HEADER include/spdk/blob.h 00:03:40.654 TEST_HEADER include/spdk/conf.h 00:03:40.654 TEST_HEADER include/spdk/cpuset.h 00:03:40.654 TEST_HEADER include/spdk/config.h 00:03:40.654 TEST_HEADER include/spdk/crc16.h 00:03:40.654 TEST_HEADER include/spdk/crc32.h 00:03:40.654 TEST_HEADER include/spdk/crc64.h 00:03:40.654 TEST_HEADER include/spdk/dif.h 00:03:40.654 TEST_HEADER include/spdk/dma.h 00:03:40.654 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.654 TEST_HEADER include/spdk/endian.h 00:03:40.654 TEST_HEADER include/spdk/env.h 00:03:40.654 TEST_HEADER include/spdk/event.h 00:03:40.654 TEST_HEADER include/spdk/fd_group.h 00:03:40.654 TEST_HEADER include/spdk/fd.h 00:03:40.654 TEST_HEADER include/spdk/file.h 00:03:40.654 TEST_HEADER include/spdk/ftl.h 00:03:40.654 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.654 TEST_HEADER include/spdk/hexlify.h 00:03:40.654 TEST_HEADER include/spdk/histogram_data.h 00:03:40.654 TEST_HEADER include/spdk/idxd.h 00:03:40.654 TEST_HEADER include/spdk/idxd_spec.h 00:03:40.654 TEST_HEADER include/spdk/init.h 00:03:40.654 TEST_HEADER include/spdk/ioat.h 00:03:40.654 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.654 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.654 TEST_HEADER include/spdk/json.h 00:03:40.654 TEST_HEADER include/spdk/keyring.h 00:03:40.654 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.654 TEST_HEADER include/spdk/keyring_module.h 00:03:40.654 TEST_HEADER include/spdk/likely.h 00:03:40.654 TEST_HEADER include/spdk/log.h 00:03:40.654 TEST_HEADER include/spdk/lvol.h 00:03:40.654 TEST_HEADER include/spdk/memory.h 00:03:40.654 TEST_HEADER include/spdk/mmio.h 00:03:40.654 TEST_HEADER include/spdk/nbd.h 00:03:40.654 TEST_HEADER include/spdk/net.h 00:03:40.654 TEST_HEADER include/spdk/notify.h 00:03:40.654 TEST_HEADER include/spdk/nvme.h 00:03:40.654 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.654 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:40.654 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.654 TEST_HEADER include/spdk/nvme_spec.h 00:03:40.654 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:40.655 TEST_HEADER include/spdk/nvme_zns.h 00:03:40.655 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:40.655 TEST_HEADER include/spdk/nvmf.h 00:03:40.655 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:40.655 TEST_HEADER include/spdk/nvmf_spec.h 00:03:40.655 TEST_HEADER include/spdk/nvmf_transport.h 00:03:40.655 TEST_HEADER include/spdk/opal_spec.h 00:03:40.655 TEST_HEADER include/spdk/opal.h 00:03:40.655 TEST_HEADER include/spdk/pipe.h 00:03:40.655 TEST_HEADER include/spdk/pci_ids.h 00:03:40.655 TEST_HEADER include/spdk/queue.h 00:03:40.655 TEST_HEADER include/spdk/reduce.h 00:03:40.655 TEST_HEADER 
include/spdk/rpc.h 00:03:40.655 TEST_HEADER include/spdk/scheduler.h 00:03:40.655 TEST_HEADER include/spdk/scsi.h 00:03:40.655 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.655 TEST_HEADER include/spdk/sock.h 00:03:40.655 TEST_HEADER include/spdk/stdinc.h 00:03:40.655 TEST_HEADER include/spdk/thread.h 00:03:40.655 TEST_HEADER include/spdk/string.h 00:03:40.655 TEST_HEADER include/spdk/trace.h 00:03:40.655 TEST_HEADER include/spdk/trace_parser.h 00:03:40.655 TEST_HEADER include/spdk/tree.h 00:03:40.655 TEST_HEADER include/spdk/ublk.h 00:03:40.655 CC app/iscsi_tgt/iscsi_tgt.o 00:03:40.655 TEST_HEADER include/spdk/util.h 00:03:40.655 TEST_HEADER include/spdk/uuid.h 00:03:40.655 TEST_HEADER include/spdk/version.h 00:03:40.655 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.655 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.655 TEST_HEADER include/spdk/vhost.h 00:03:40.655 TEST_HEADER include/spdk/vmd.h 00:03:40.655 CC app/nvmf_tgt/nvmf_main.o 00:03:40.655 TEST_HEADER include/spdk/xor.h 00:03:40.655 TEST_HEADER include/spdk/zipf.h 00:03:40.655 CXX test/cpp_headers/accel.o 00:03:40.655 CXX test/cpp_headers/accel_module.o 00:03:40.655 CXX test/cpp_headers/assert.o 00:03:40.655 CXX test/cpp_headers/base64.o 00:03:40.655 CXX test/cpp_headers/barrier.o 00:03:40.655 CXX test/cpp_headers/bdev.o 00:03:40.655 CXX test/cpp_headers/bdev_module.o 00:03:40.655 CXX test/cpp_headers/bdev_zone.o 00:03:40.655 CXX test/cpp_headers/bit_array.o 00:03:40.655 CXX test/cpp_headers/blob_bdev.o 00:03:40.655 CXX test/cpp_headers/bit_pool.o 00:03:40.655 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.655 CXX test/cpp_headers/blobfs.o 00:03:40.655 CXX test/cpp_headers/blob.o 00:03:40.655 CXX test/cpp_headers/conf.o 00:03:40.655 CXX test/cpp_headers/config.o 00:03:40.655 CXX test/cpp_headers/cpuset.o 00:03:40.655 CXX test/cpp_headers/crc16.o 00:03:40.655 CC app/spdk_dd/spdk_dd.o 00:03:40.655 CC examples/util/zipf/zipf.o 00:03:40.655 CXX test/cpp_headers/crc32.o 00:03:40.914 CC examples/ioat/perf/perf.o 00:03:40.914 CC examples/ioat/verify/verify.o 00:03:40.914 CC test/app/histogram_perf/histogram_perf.o 00:03:40.914 CC app/fio/nvme/fio_plugin.o 00:03:40.914 CC test/app/jsoncat/jsoncat.o 00:03:40.914 CC test/env/vtophys/vtophys.o 00:03:40.914 CC test/thread/poller_perf/poller_perf.o 00:03:40.914 CC test/app/stub/stub.o 00:03:40.914 CC test/env/pci/pci_ut.o 00:03:40.914 CC test/env/memory/memory_ut.o 00:03:40.914 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.914 CC app/spdk_tgt/spdk_tgt.o 00:03:40.914 CC test/dma/test_dma/test_dma.o 00:03:40.914 CC test/app/bdev_svc/bdev_svc.o 00:03:40.914 CC app/fio/bdev/fio_plugin.o 00:03:40.914 LINK spdk_lspci 00:03:40.914 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.914 CC test/env/mem_callbacks/mem_callbacks.o 00:03:41.182 LINK spdk_nvme_discover 00:03:41.182 LINK rpc_client_test 00:03:41.182 LINK nvmf_tgt 00:03:41.182 LINK iscsi_tgt 00:03:41.182 LINK histogram_perf 00:03:41.182 LINK zipf 00:03:41.182 LINK interrupt_tgt 00:03:41.182 CXX test/cpp_headers/crc64.o 00:03:41.182 CXX test/cpp_headers/dif.o 00:03:41.182 CXX test/cpp_headers/dma.o 00:03:41.182 LINK env_dpdk_post_init 00:03:41.182 LINK jsoncat 00:03:41.182 LINK poller_perf 00:03:41.182 CXX test/cpp_headers/endian.o 00:03:41.182 LINK vtophys 00:03:41.182 CXX test/cpp_headers/env_dpdk.o 00:03:41.182 CXX test/cpp_headers/env.o 00:03:41.182 CXX test/cpp_headers/event.o 00:03:41.182 LINK bdev_svc 00:03:41.182 CXX test/cpp_headers/fd_group.o 00:03:41.182 CXX test/cpp_headers/fd.o 00:03:41.182 CXX 
test/cpp_headers/file.o 00:03:41.182 CXX test/cpp_headers/ftl.o 00:03:41.182 CXX test/cpp_headers/gpt_spec.o 00:03:41.182 LINK spdk_tgt 00:03:41.182 CXX test/cpp_headers/histogram_data.o 00:03:41.182 CXX test/cpp_headers/hexlify.o 00:03:41.449 LINK ioat_perf 00:03:41.449 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:41.450 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:41.450 LINK stub 00:03:41.450 CXX test/cpp_headers/idxd.o 00:03:41.450 CXX test/cpp_headers/idxd_spec.o 00:03:41.450 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:41.450 LINK verify 00:03:41.450 LINK spdk_trace_record 00:03:41.450 CXX test/cpp_headers/init.o 00:03:41.450 CXX test/cpp_headers/ioat.o 00:03:41.711 CXX test/cpp_headers/ioat_spec.o 00:03:41.711 CXX test/cpp_headers/iscsi_spec.o 00:03:41.711 CXX test/cpp_headers/json.o 00:03:41.711 CXX test/cpp_headers/jsonrpc.o 00:03:41.711 LINK spdk_trace 00:03:41.711 CXX test/cpp_headers/keyring.o 00:03:41.711 CXX test/cpp_headers/keyring_module.o 00:03:41.711 CXX test/cpp_headers/likely.o 00:03:41.711 CXX test/cpp_headers/log.o 00:03:41.711 CXX test/cpp_headers/lvol.o 00:03:41.711 CXX test/cpp_headers/memory.o 00:03:41.711 CXX test/cpp_headers/mmio.o 00:03:41.711 CXX test/cpp_headers/nbd.o 00:03:41.711 CXX test/cpp_headers/net.o 00:03:41.711 CXX test/cpp_headers/notify.o 00:03:41.711 CXX test/cpp_headers/nvme.o 00:03:41.711 CXX test/cpp_headers/nvme_intel.o 00:03:41.711 CXX test/cpp_headers/nvme_ocssd.o 00:03:41.711 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.711 CXX test/cpp_headers/nvme_spec.o 00:03:41.711 CXX test/cpp_headers/nvme_zns.o 00:03:41.711 LINK test_dma 00:03:41.711 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.711 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.711 CXX test/cpp_headers/nvmf.o 00:03:41.711 CXX test/cpp_headers/nvmf_spec.o 00:03:41.711 LINK spdk_dd 00:03:41.978 CXX test/cpp_headers/nvmf_transport.o 00:03:41.978 CXX test/cpp_headers/opal.o 00:03:41.978 CXX test/cpp_headers/opal_spec.o 00:03:41.978 CC examples/sock/hello_world/hello_sock.o 00:03:41.978 CXX test/cpp_headers/pci_ids.o 00:03:41.978 CXX test/cpp_headers/pipe.o 00:03:41.978 CC examples/vmd/lsvmd/lsvmd.o 00:03:41.978 CC test/event/event_perf/event_perf.o 00:03:41.978 CC examples/thread/thread/thread_ex.o 00:03:41.978 CC test/event/reactor/reactor.o 00:03:41.978 CC examples/vmd/led/led.o 00:03:41.978 CC examples/idxd/perf/perf.o 00:03:41.978 LINK pci_ut 00:03:41.978 CXX test/cpp_headers/queue.o 00:03:41.978 CC test/event/reactor_perf/reactor_perf.o 00:03:41.978 CXX test/cpp_headers/reduce.o 00:03:42.241 CXX test/cpp_headers/rpc.o 00:03:42.241 CC test/event/app_repeat/app_repeat.o 00:03:42.241 CXX test/cpp_headers/scheduler.o 00:03:42.241 CXX test/cpp_headers/scsi.o 00:03:42.241 CXX test/cpp_headers/scsi_spec.o 00:03:42.241 CXX test/cpp_headers/sock.o 00:03:42.241 CXX test/cpp_headers/stdinc.o 00:03:42.241 CXX test/cpp_headers/string.o 00:03:42.241 CXX test/cpp_headers/thread.o 00:03:42.241 CXX test/cpp_headers/trace.o 00:03:42.241 CXX test/cpp_headers/trace_parser.o 00:03:42.241 CXX test/cpp_headers/tree.o 00:03:42.241 CXX test/cpp_headers/ublk.o 00:03:42.241 CXX test/cpp_headers/util.o 00:03:42.241 CXX test/cpp_headers/uuid.o 00:03:42.241 CXX test/cpp_headers/version.o 00:03:42.241 CXX test/cpp_headers/vfio_user_pci.o 00:03:42.241 LINK vhost_fuzz 00:03:42.241 CC test/event/scheduler/scheduler.o 00:03:42.241 LINK reactor 00:03:42.241 CXX test/cpp_headers/vfio_user_spec.o 00:03:42.241 CXX test/cpp_headers/vhost.o 00:03:42.241 CXX test/cpp_headers/vmd.o 00:03:42.241 CXX 
test/cpp_headers/xor.o 00:03:42.241 LINK nvme_fuzz 00:03:42.241 CC app/vhost/vhost.o 00:03:42.241 LINK led 00:03:42.241 CXX test/cpp_headers/zipf.o 00:03:42.506 LINK event_perf 00:03:42.506 LINK reactor_perf 00:03:42.506 LINK lsvmd 00:03:42.506 LINK spdk_bdev 00:03:42.506 LINK app_repeat 00:03:42.506 LINK hello_sock 00:03:42.506 LINK thread 00:03:42.766 LINK spdk_nvme 00:03:42.766 LINK mem_callbacks 00:03:42.766 LINK vhost 00:03:42.766 CC test/accel/dif/dif.o 00:03:42.766 CC test/nvme/reset/reset.o 00:03:42.766 CC test/nvme/startup/startup.o 00:03:42.766 CC test/nvme/connect_stress/connect_stress.o 00:03:42.766 CC test/nvme/e2edp/nvme_dp.o 00:03:42.766 CC test/nvme/simple_copy/simple_copy.o 00:03:42.766 CC test/nvme/sgl/sgl.o 00:03:42.766 CC test/nvme/err_injection/err_injection.o 00:03:42.766 CC test/nvme/overhead/overhead.o 00:03:42.766 CC test/nvme/reserve/reserve.o 00:03:42.766 CC test/nvme/aer/aer.o 00:03:42.766 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.766 CC test/nvme/compliance/nvme_compliance.o 00:03:42.766 CC test/blobfs/mkfs/mkfs.o 00:03:42.766 CC test/nvme/boot_partition/boot_partition.o 00:03:42.766 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.766 CC test/nvme/fdp/fdp.o 00:03:42.766 LINK scheduler 00:03:42.766 CC test/nvme/cuse/cuse.o 00:03:42.766 LINK spdk_nvme_identify 00:03:42.766 CC test/lvol/esnap/esnap.o 00:03:43.025 LINK idxd_perf 00:03:43.025 LINK boot_partition 00:03:43.025 LINK connect_stress 00:03:43.025 CC examples/nvme/abort/abort.o 00:03:43.285 CC examples/nvme/hotplug/hotplug.o 00:03:43.285 LINK err_injection 00:03:43.285 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:43.285 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:43.285 CC examples/nvme/hello_world/hello_world.o 00:03:43.285 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:43.285 CC examples/nvme/reconnect/reconnect.o 00:03:43.285 CC examples/nvme/arbitration/arbitration.o 00:03:43.285 CC examples/accel/perf/accel_perf.o 00:03:43.285 LINK reset 00:03:43.285 LINK mkfs 00:03:43.285 LINK doorbell_aers 00:03:43.285 LINK reserve 00:03:43.285 CC examples/blob/hello_world/hello_blob.o 00:03:43.285 LINK simple_copy 00:03:43.285 CC examples/blob/cli/blobcli.o 00:03:43.285 LINK startup 00:03:43.285 LINK memory_ut 00:03:43.285 LINK aer 00:03:43.285 LINK fused_ordering 00:03:43.285 LINK spdk_top 00:03:43.285 LINK fdp 00:03:43.285 LINK spdk_nvme_perf 00:03:43.285 LINK sgl 00:03:43.545 LINK cmb_copy 00:03:43.545 LINK overhead 00:03:43.545 LINK nvme_compliance 00:03:43.545 LINK hello_world 00:03:43.545 LINK pmr_persistence 00:03:43.545 LINK nvme_dp 00:03:43.545 LINK hello_blob 00:03:43.803 LINK hotplug 00:03:43.803 LINK dif 00:03:43.803 LINK arbitration 00:03:44.063 LINK reconnect 00:03:44.063 LINK abort 00:03:44.063 LINK nvme_manage 00:03:44.063 LINK blobcli 00:03:44.063 LINK accel_perf 00:03:44.323 CC test/bdev/bdevio/bdevio.o 00:03:44.892 CC examples/bdev/hello_world/hello_bdev.o 00:03:44.892 CC examples/bdev/bdevperf/bdevperf.o 00:03:44.892 LINK iscsi_fuzz 00:03:45.151 LINK cuse 00:03:45.151 LINK bdevio 00:03:45.151 LINK hello_bdev 00:03:46.531 LINK bdevperf 00:03:47.098 CC examples/nvmf/nvmf/nvmf.o 00:03:47.666 LINK nvmf 00:03:57.689 LINK esnap 00:03:57.956 00:03:57.956 real 2m20.923s 00:03:57.956 user 14m45.947s 00:03:57.956 sys 2m57.031s 00:03:57.956 08:17:10 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:57.956 08:17:10 make -- common/autotest_common.sh@10 -- $ set +x 00:03:57.956 ************************************ 00:03:57.956 END TEST make 00:03:57.956 
************************************ 00:03:57.956 08:17:10 -- common/autotest_common.sh@1142 -- $ return 0 00:03:57.956 08:17:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:57.956 08:17:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:57.956 08:17:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:57.956 08:17:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.956 08:17:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:57.956 08:17:10 -- pm/common@44 -- $ pid=2072899 00:03:57.956 08:17:10 -- pm/common@50 -- $ kill -TERM 2072899 00:03:57.956 08:17:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.956 08:17:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:57.956 08:17:10 -- pm/common@44 -- $ pid=2072901 00:03:57.956 08:17:10 -- pm/common@50 -- $ kill -TERM 2072901 00:03:57.956 08:17:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.956 08:17:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:57.956 08:17:10 -- pm/common@44 -- $ pid=2072903 00:03:57.956 08:17:10 -- pm/common@50 -- $ kill -TERM 2072903 00:03:57.956 08:17:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.956 08:17:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:57.956 08:17:10 -- pm/common@44 -- $ pid=2072928 00:03:57.956 08:17:10 -- pm/common@50 -- $ sudo -E kill -TERM 2072928 00:03:57.956 08:17:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.956 08:17:10 -- nvmf/common.sh@7 -- # uname -s 00:03:57.956 08:17:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.956 08:17:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.956 08:17:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.956 08:17:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.956 08:17:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.956 08:17:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.956 08:17:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.956 08:17:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.956 08:17:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.956 08:17:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.956 08:17:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:57.956 08:17:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:57.956 08:17:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.956 08:17:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.956 08:17:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:57.956 08:17:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.956 08:17:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.956 08:17:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.957 08:17:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.957 08:17:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:03:57.957 08:17:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.957 08:17:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.957 08:17:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.957 08:17:10 -- paths/export.sh@5 -- # export PATH 00:03:57.957 08:17:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.957 08:17:10 -- nvmf/common.sh@47 -- # : 0 00:03:57.957 08:17:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:57.957 08:17:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:57.957 08:17:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.957 08:17:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.957 08:17:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.957 08:17:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:57.957 08:17:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:57.957 08:17:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:57.957 08:17:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.957 08:17:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.957 08:17:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.957 08:17:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.957 08:17:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.957 08:17:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.957 08:17:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.957 08:17:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.957 08:17:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.957 08:17:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:57.957 08:17:10 -- spdk/autotest.sh@48 -- # udevadm_pid=2139978 00:03:57.957 08:17:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:57.957 08:17:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:57.957 08:17:10 -- pm/common@17 -- # local monitor 00:03:57.957 08:17:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.957 08:17:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.957 08:17:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.957 08:17:10 -- pm/common@21 -- # date +%s 00:03:57.957 08:17:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.957 08:17:10 
-- pm/common@25 -- # sleep 1 00:03:57.957 08:17:10 -- pm/common@21 -- # date +%s 00:03:57.957 08:17:10 -- pm/common@21 -- # date +%s 00:03:57.957 08:17:10 -- pm/common@21 -- # date +%s 00:03:57.957 08:17:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:03:57.957 08:17:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:03:57.957 08:17:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:03:57.957 08:17:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:03:57.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715430_collect-cpu-load.pm.log 00:03:57.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715430_collect-vmstat.pm.log 00:03:57.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715430_collect-cpu-temp.pm.log 00:03:57.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715430_collect-bmc-pm.bmc.pm.log 00:03:58.939 08:17:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:58.939 08:17:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:58.939 08:17:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.939 08:17:11 -- common/autotest_common.sh@10 -- # set +x 00:03:58.939 08:17:11 -- spdk/autotest.sh@59 -- # create_test_list 00:03:58.939 08:17:11 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:58.939 08:17:11 -- common/autotest_common.sh@10 -- # set +x 00:03:58.939 08:17:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:58.939 08:17:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.939 08:17:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.939 08:17:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:58.939 08:17:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.939 08:17:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:58.940 08:17:11 -- common/autotest_common.sh@1455 -- # uname 00:03:58.940 08:17:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:58.940 08:17:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:58.940 08:17:11 -- common/autotest_common.sh@1475 -- # uname 00:03:59.198 08:17:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:59.198 08:17:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:59.198 08:17:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:59.198 08:17:11 -- spdk/autotest.sh@72 -- # hash lcov 00:03:59.198 08:17:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:59.198 
08:17:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:59.198 --rc lcov_branch_coverage=1 00:03:59.198 --rc lcov_function_coverage=1 00:03:59.198 --rc genhtml_branch_coverage=1 00:03:59.198 --rc genhtml_function_coverage=1 00:03:59.198 --rc genhtml_legend=1 00:03:59.198 --rc geninfo_all_blocks=1 00:03:59.199 ' 00:03:59.199 08:17:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:59.199 --rc lcov_branch_coverage=1 00:03:59.199 --rc lcov_function_coverage=1 00:03:59.199 --rc genhtml_branch_coverage=1 00:03:59.199 --rc genhtml_function_coverage=1 00:03:59.199 --rc genhtml_legend=1 00:03:59.199 --rc geninfo_all_blocks=1 00:03:59.199 ' 00:03:59.199 08:17:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:59.199 --rc lcov_branch_coverage=1 00:03:59.199 --rc lcov_function_coverage=1 00:03:59.199 --rc genhtml_branch_coverage=1 00:03:59.199 --rc genhtml_function_coverage=1 00:03:59.199 --rc genhtml_legend=1 00:03:59.199 --rc geninfo_all_blocks=1 00:03:59.199 --no-external' 00:03:59.199 08:17:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:59.199 --rc lcov_branch_coverage=1 00:03:59.199 --rc lcov_function_coverage=1 00:03:59.199 --rc genhtml_branch_coverage=1 00:03:59.199 --rc genhtml_function_coverage=1 00:03:59.199 --rc genhtml_legend=1 00:03:59.199 --rc geninfo_all_blocks=1 00:03:59.199 --no-external' 00:03:59.199 08:17:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:59.199 lcov: LCOV version 1.14 00:03:59.199 08:17:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:31.302 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:31.302 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:31.303 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:31.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:31.303 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:31.303 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:31.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:31.304 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:05:03.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:03.397 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.081 08:18:38 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:30.081 08:18:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.081 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.081 08:18:38 -- spdk/autotest.sh@91 -- # rm -f 00:05:30.081 08:18:38 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:30.081 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:05:30.081 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:30.081 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:30.081 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:30.081 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:30.081 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:30.081 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:30.081 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:30.081 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:30.081 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:30.081 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:30.081 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:30.081 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:30.081 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:30.081 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:30.081 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:30.081 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:30.081 08:18:40 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:30.081 08:18:40 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:30.081 08:18:40 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:30.081 08:18:40 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:30.081 08:18:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:30.081 08:18:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:30.081 08:18:40 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:30.081 08:18:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.081 08:18:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:30.081 08:18:40 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:30.081 08:18:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.081 08:18:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.081 08:18:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:30.081 08:18:40 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:30.082 08:18:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.082 No valid GPT data, bailing 00:05:30.082 08:18:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.082 08:18:41 -- scripts/common.sh@391 -- # pt= 00:05:30.082 08:18:41 -- scripts/common.sh@392 -- # return 1 00:05:30.082 08:18:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:30.082 
1+0 records in 00:05:30.082 1+0 records out 00:05:30.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235359 s, 446 MB/s 00:05:30.082 08:18:41 -- spdk/autotest.sh@118 -- # sync 00:05:30.082 08:18:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:30.082 08:18:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:30.082 08:18:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:31.463 08:18:43 -- spdk/autotest.sh@124 -- # uname -s 00:05:31.463 08:18:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:31.463 08:18:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:31.463 08:18:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.463 08:18:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.463 08:18:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.724 ************************************ 00:05:31.724 START TEST setup.sh 00:05:31.724 ************************************ 00:05:31.724 08:18:44 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:31.724 * Looking for test storage... 00:05:31.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:31.724 08:18:44 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:31.724 08:18:44 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:31.724 08:18:44 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:31.724 08:18:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.724 08:18:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.724 08:18:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.724 ************************************ 00:05:31.724 START TEST acl 00:05:31.724 ************************************ 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:31.724 * Looking for test storage... 
00:05:31.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:31.724 08:18:44 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.724 08:18:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:31.724 08:18:44 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:31.724 08:18:44 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:31.724 08:18:44 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:31.724 08:18:44 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:31.724 08:18:44 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:31.724 08:18:44 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:31.724 08:18:44 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:34.264 08:18:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:34.264 08:18:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:34.264 08:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.264 08:18:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:34.264 08:18:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.264 08:18:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.170 Hugepages 00:05:36.170 node hugesize free / total 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 00:05:36.170 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.170 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.171 08:18:48 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:36.171 08:18:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:36.171 08:18:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.171 08:18:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.171 08:18:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:36.171 ************************************ 00:05:36.171 START TEST denied 00:05:36.171 ************************************ 00:05:36.171 08:18:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:36.171 08:18:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:05:36.171 08:18:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:36.171 08:18:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:05:36.171 08:18:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.171 08:18:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:38.707 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:05:38.707 08:18:50 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:38.707 08:18:50 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:42.002 00:05:42.002 real 0m5.481s 00:05:42.002 user 0m1.692s 00:05:42.002 sys 0m2.852s 00:05:42.002 08:18:54 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.002 08:18:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:42.002 ************************************ 00:05:42.002 END TEST denied 00:05:42.002 ************************************ 00:05:42.002 08:18:54 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:42.002 08:18:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:42.002 08:18:54 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.002 08:18:54 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.002 08:18:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:42.002 ************************************ 00:05:42.002 START TEST allowed 00:05:42.002 ************************************ 00:05:42.002 08:18:54 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:42.002 08:18:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:05:42.002 08:18:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:42.002 08:18:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:05:42.002 08:18:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.002 08:18:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:45.296 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:45.296 08:18:57 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:45.296 08:18:57 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:45.296 08:18:57 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:45.296 08:18:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:45.296 08:18:57 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.204 00:05:47.204 real 0m5.358s 00:05:47.204 user 0m1.495s 00:05:47.204 sys 0m2.739s 00:05:47.204 08:18:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.205 08:18:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:47.205 ************************************ 00:05:47.205 END TEST allowed 00:05:47.205 ************************************ 00:05:47.205 08:18:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:47.205 00:05:47.205 real 0m15.399s 00:05:47.205 user 0m4.917s 00:05:47.205 sys 0m8.548s 00:05:47.205 08:18:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.205 08:18:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:47.205 ************************************ 00:05:47.205 END TEST acl 00:05:47.205 ************************************ 00:05:47.205 08:18:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:47.205 08:18:59 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:47.205 08:18:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.205 08:18:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.205 08:18:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:47.205 ************************************ 00:05:47.205 START TEST hugepages 00:05:47.205 ************************************ 00:05:47.205 08:18:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:47.205 * Looking for test storage... 00:05:47.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 27061624 kB' 'MemAvailable: 30996684 kB' 'Buffers: 2704 kB' 'Cached: 10116916 kB' 'SwapCached: 0 kB' 'Active: 6922928 kB' 'Inactive: 3677088 kB' 'Active(anon): 6527288 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483708 kB' 'Mapped: 224112 kB' 'Shmem: 6046892 kB' 'KReclaimable: 408456 kB' 'Slab: 767760 kB' 'SReclaimable: 408456 kB' 'SUnreclaim: 359304 kB' 'KernelStack: 12896 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304776 kB' 'Committed_AS: 7631272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.205 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.206 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.465 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.465 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.465 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.465 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.465 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.465 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:47.466 
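For reference, the long match-and-continue trace above is setup/common.sh walking /proc/meminfo key by key until it reaches the requested field (Hugepagesize here, which returns 2048). A minimal sketch of the same lookup, using a hypothetical helper name (get_meminfo_sketch is illustrative, not the script's actual function):

  get_meminfo_sketch() {
      # Return the value column for one /proc/meminfo key (kB for most fields).
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  # get_meminfo_sketch Hugepagesize  -> 2048 on this host, i.e. 2 MiB hugepages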
08:18:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:47.466 08:18:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:47.466 08:18:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.466 08:18:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.466 08:18:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:47.466 ************************************ 00:05:47.466 START TEST default_setup 00:05:47.466 ************************************ 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.467 08:18:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:49.370 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.370 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.370 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.370 
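A few entries above, before the driver rebinds that continue below, hugepages.sh computes the test allocation and clears the existing pools: size=2097152 kB with 2048 kB pages gives nr_hugepages=1024, pinned to node 0 (node_ids=('0'), nodes_test[0]=1024), after clear_hp has echoed 0 into every per-node pool. A rough sketch of that bookkeeping, with illustrative paths (the redirect target of the traced 'echo 0' is not shown in the xtrace and is assumed here):

  default_hugepages=2048                        # kB, from Hugepagesize
  size=2097152                                  # kB requested by default_setup (2 GiB)
  nr_hugepages=$(( size / default_hugepages ))  # -> 1024 pages

  # clear_hp: zero every hugepage pool on every NUMA node before the test runs
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node_dir"/hugepages/hugepages-*/nr_hugepages; do
          echo 0 > "$hp"    # assumed target of the traced 'echo 0'; needs root
      done
  done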
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.370 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.370 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.370 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.370 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:49.370 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.370 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.370 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.630 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.630 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.630 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.630 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.630 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:50.572 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29154784 kB' 'MemAvailable: 33090044 kB' 'Buffers: 2704 kB' 'Cached: 10117008 kB' 'SwapCached: 0 kB' 'Active: 6940580 kB' 'Inactive: 3677088 kB' 'Active(anon): 6544940 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501148 kB' 'Mapped: 224236 kB' 'Shmem: 6046984 kB' 'KReclaimable: 408656 kB' 'Slab: 767280 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358624 kB' 
'KernelStack: 12752 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7651776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.572 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.572 
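Quick sanity check on the meminfo snapshot just printed: HugePages_Total 1024 x Hugepagesize 2048 kB = 2097152 kB, which matches the Hugetlb field exactly, so the 2 GiB requested by default_setup is reserved; HugePages_Free is still 1024 because nothing has mapped the pool yet.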
08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.573 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:50.574 08:19:02 
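The AnonHugePages lookup that just completed (anon=0) uses the array form of the same parser: the whole meminfo file is read with mapfile and, when a NUMA node is given, the per-node file is used with its leading "Node <n> " prefix stripped before matching; in this run node is empty, so plain /proc/meminfo is parsed. A sketch of that path, assuming extglob is enabled as in the traced shell (variable names illustrative):

  shopt -s extglob
  node=""                                    # empty here: system-wide lookup
  mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")           # strip per-node prefix; no-op for /proc/meminfo
  for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == AnonHugePages ]] && echo "$val"   # -> 0 (kB) in this run
  done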
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29156236 kB' 'MemAvailable: 33091496 kB' 'Buffers: 2704 kB' 'Cached: 10117008 kB' 'SwapCached: 0 kB' 'Active: 6940124 kB' 'Inactive: 3677088 kB' 'Active(anon): 6544484 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500556 kB' 'Mapped: 224276 kB' 'Shmem: 6046984 kB' 'KReclaimable: 408656 kB' 'Slab: 767264 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358608 kB' 'KernelStack: 12496 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7649188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.574 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.575 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- 
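Putting this verification pass together: the script gathers anonymous THP usage plus the hugepage counters, and the snapshots above give values consistent with the 1024 pages just configured. Using the illustrative helper sketched earlier (variable names here are likewise illustrative):

  anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB in the snapshots above
  surp=$(get_meminfo_sketch HugePages_Surp)   # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0, queried next in the trace
  total=$(get_meminfo_sketch HugePages_Total) # 1024, matching nr_hugepages
  free=$(get_meminfo_sketch HugePages_Free)   # 1024, none in use yet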
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29156916 kB' 'MemAvailable: 33092176 kB' 'Buffers: 2704 kB' 'Cached: 10117028 kB' 'SwapCached: 0 kB' 'Active: 6939940 kB' 'Inactive: 3677088 kB' 'Active(anon): 6544300 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500568 kB' 'Mapped: 224148 kB' 'Shmem: 6047004 kB' 'KReclaimable: 408656 kB' 'Slab: 767280 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358624 kB' 'KernelStack: 12480 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7649208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.576 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 
[... get_meminfo HugePages_Rsvd scan: /proc/meminfo fields from SwapCached through Unaccepted checked against HugePages_Rsvd and skipped with "continue" ...]
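The block above is bash xtrace from setup/common.sh's get_meminfo helper: it picks /proc/meminfo (or a per-node meminfo file when a node id is supplied), then walks it with IFS=': ' read, skipping every field until the requested key matches and echoing its value. A minimal standalone sketch of that lookup pattern follows; the function name and argument handling are illustrative, not the repository's exact code.

    # Sketch of the get_meminfo lookup pattern shown in the trace
    # (illustrative names, not the exact setup/common.sh implementation).
    meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo f
        # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r -a f; do
            # Per-node meminfo lines are prefixed with "Node <id>"; drop that prefix.
            [[ ${f[0]:-} == Node ]] && f=("${f[@]:2}")
            [[ ${f[0]:-} == "$get" ]] && { echo "${f[1]}"; return 0; }
        done < "$mem_f"
        return 1
    }

    meminfo_value HugePages_Surp       # prints 0 on this machine
    meminfo_value HugePages_Total 0    # per-node lookup on node0, prints 1024 here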
setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:50.840 nr_hugepages=1024 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:50.840 resv_hugepages=0 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:50.840 surplus_hugepages=0 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:50.840 anon_hugepages=0 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29159312 
kB' 'MemAvailable: 33094572 kB' 'Buffers: 2704 kB' 'Cached: 10117032 kB' 'SwapCached: 0 kB' 'Active: 6939668 kB' 'Inactive: 3677088 kB' 'Active(anon): 6544028 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500296 kB' 'Mapped: 224148 kB' 'Shmem: 6047008 kB' 'KReclaimable: 408656 kB' 'Slab: 767280 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358624 kB' 'KernelStack: 12480 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7649228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:50.840 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.841 08:19:03 
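For context on the values being compared: the HugePages_* counters come from the kernel's hugetlb pool accounting, and the default_setup test expects a pool of 1024 free, unreserved 2 MB pages. A quick way to eyeball the same counters the trace extracts one field at a time:

    # The four pool counters the trace keeps extracting, straight from /proc/meminfo:
    #   HugePages_Total - size of the huge page pool (persistent + surplus pages)
    #   HugePages_Free  - pool pages not yet handed out
    #   HugePages_Rsvd  - pages reserved for mappings but not yet faulted in
    #   HugePages_Surp  - surplus pages allocated above vm.nr_hugepages
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
    # On this host the snapshot above shows Total=1024, Free=1024, Rsvd=0, Surp=0,
    # with a 2048 kB page size (Hugetlb: 2097152 kB in total).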
[... get_meminfo HugePages_Total scan: /proc/meminfo fields from Active through Unaccepted checked against HugePages_Total and skipped with "continue" ...]
00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
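The (( 1024 == nr_hugepages + surp + resv )) checks woven through this trace reduce to one identity: the pool size reported by the kernel must equal the configured nr_hugepages plus any surplus and reserved pages (all zero here). A hedged sketch of that check, reusing the hypothetical meminfo_value helper from the earlier sketch:

    # The accounting identity behind the checks at setup/hugepages.sh@107-110
    # (sketch; assumes nr_hugepages can be read back from the sysctl the test
    # configured earlier, and reuses the hypothetical meminfo_value helper above).
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)    # 1024 in this run
    total=$(meminfo_value HugePages_Total)
    surp=$(meminfo_value HugePages_Surp)
    resv=$(meminfo_value HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent: $total pages"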
00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 12720012 kB' 'MemUsed: 11852344 kB' 'SwapCached: 0 kB' 'Active: 5428700 kB' 'Inactive: 3286624 kB' 'Active(anon): 5293528 kB' 'Inactive(anon): 0 kB' 'Active(file): 135172 kB' 'Inactive(file): 3286624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8550028 kB' 'Mapped: 134952 kB' 'AnonPages: 168456 kB' 'Shmem: 5128232 kB' 'KernelStack: 6952 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272732 kB' 'Slab: 462788 kB' 'SReclaimable: 272732 kB' 'SUnreclaim: 190056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.842 08:19:03 setup.sh.hugepages.default_setup -- 
[... get_meminfo HugePages_Surp scan on node0: node0 meminfo fields from Inactive(file) through Unaccepted checked against HugePages_Surp and skipped with "continue" ...]
00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:50.843 node0=1024 expecting 1024 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:50.843 00:05:50.843 real 0m3.349s 00:05:50.843 user 0m1.067s 00:05:50.843 sys 0m1.431s 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.843 08:19:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:50.843 ************************************ 00:05:50.843 END TEST default_setup 00:05:50.843 ************************************ 00:05:50.843 08:19:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:50.843 08:19:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:50.843 08:19:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.843 08:19:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.843 08:19:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:50.843 ************************************ 00:05:50.843 START TEST per_node_1G_alloc 00:05:50.843 ************************************ 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:50.843 08:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.843 08:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:52.750 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:52.750 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:52.750 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:52.750 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:52.750 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:52.750 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:52.750 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:52.750 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:52.750 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:52.750 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:52.750 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:52.750 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:52.750 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:52.750 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:52.750 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:52.750 
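The traced get_test_nr_hugepages call above sizes the per-node pools: a 1048576 kB (1 GiB) request against the 2048 kB default hugepage size reported in the meminfo dumps below works out to 512 pages, and that count is assigned to each of nodes 0 and 1 before NRHUGE=512 HUGENODE=0,1 drives scripts/setup.sh. A minimal sketch of that arithmetic, assuming the 2048 kB page size from the log (variable names follow the trace; nothing here is copied from setup/hugepages.sh itself):

# Sketch only -- per-node hugepage sizing as suggested by the xtrace above.
declare -a nodes_test
default_hugepages_kb=2048                  # "Hugepagesize: 2048 kB" in the meminfo dumps
size_kb=1048576                            # size passed to the traced get_test_nr_hugepages call
node_ids=(0 1)                             # the two NUMA nodes named in the trace
nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1048576 / 2048 = 512
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages         # 512 pages on node0 and 512 on node1
done
NRHUGE=$nr_hugepages HUGENODE=0,1 scripts/setup.sh

The verify pass that follows (hugepages.sh@147 nr_hugepages=1024) then checks that the two 512-page pools add up to the 1024 total shown in the meminfo dumps.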
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:52.750 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29153896 kB' 'MemAvailable: 33089156 kB' 'Buffers: 2704 kB' 'Cached: 10117128 kB' 'SwapCached: 0 kB' 'Active: 6940232 kB' 'Inactive: 3677088 kB' 'Active(anon): 6544592 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500668 kB' 'Mapped: 224224 kB' 'Shmem: 6047104 kB' 'KReclaimable: 408656 kB' 'Slab: 767236 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358580 kB' 'KernelStack: 12512 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7651788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.750 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.751 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29154260 kB' 'MemAvailable: 33089520 kB' 'Buffers: 2704 kB' 'Cached: 10117132 kB' 'SwapCached: 0 kB' 'Active: 6940612 kB' 'Inactive: 3677088 kB' 'Active(anon): 6544972 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501136 kB' 'Mapped: 224208 kB' 'Shmem: 6047108 kB' 'KReclaimable: 408656 kB' 'Slab: 767264 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358608 kB' 'KernelStack: 12576 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7650556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.752 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.016 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- 
00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:53.017 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:53.018 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:53.018 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:53.018 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:53.018 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:53.018 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:53.018 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29156652 kB' 'MemAvailable: 33091912 kB' 'Buffers: 2704 kB' 'Cached: 10117148 kB' 'SwapCached: 0 kB' 'Active: 6941672 kB' 'Inactive: 3677088 kB' 'Active(anon): 6546032 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502140 kB' 'Mapped: 224208 kB' 'Shmem: 6047124 kB' 'KReclaimable: 408656 kB' 'Slab: 767264 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358608 kB' 'KernelStack: 12864 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7652064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB'
00:05:53.018 [setup/common.sh@31-32: IFS=': ' read -r var val _ loop; every key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped via continue]
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:53.019 nr_hugepages=1024
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:53.019 resv_hugepages=0
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:53.019 surplus_hugepages=0
00:05:53.019 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:53.019 anon_hugepages=0
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
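Before moving on to the per-node counters, hugepages.sh reconciles the system-wide values it just read: the configured page count has to equal plain pages plus surplus plus reserved. A rough sketch of that arithmetic, reusing the illustrative helper from the earlier sketch and the values echoed above:

nr_hugepages=1024                                      # page count requested by the test
surp=$(meminfo_value HugePages_Surp)                   # 0 in the snapshot above
resv=$(meminfo_value HugePages_Rsvd)                   # 0 in the snapshot above
total=$(meminfo_value HugePages_Total)                 # 1024 in the snapshot above
(( total == nr_hugepages + surp + resv )) || exit 1    # every configured page must be accounted for
(( total == nr_hugepages )) && echo "no surplus or reserved pages outstanding"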
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:53.020 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29158668 kB' 'MemAvailable: 33093928 kB' 'Buffers: 2704 kB' 'Cached: 10117172 kB' 'SwapCached: 0 kB' 'Active: 6942572 kB' 'Inactive: 3677088 kB' 'Active(anon): 6546932 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503012 kB' 'Mapped: 224208 kB' 'Shmem: 6047148 kB' 'KReclaimable: 408656 kB' 'Slab: 767264 kB' 'SReclaimable: 408656 kB' 'SUnreclaim: 358608 kB' 'KernelStack: 12848 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7650600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB'
00:05:53.021 [setup/common.sh@31-32: IFS=': ' read -r var val _ loop; every key before HugePages_Total is compared against it and skipped via continue]
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
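get_nodes found two NUMA nodes and expects the 1024 pages to be split 512/512 between them, so the same lookup is repeated next against each node's sysfs meminfo. Those per-node files prefix every line with "Node N ", which the script strips with the extglob pattern visible in the trace. A sketch of that per-node variant, again with an illustrative helper name rather than the exact setup/common.sh source:

shopt -s extglob                                       # needed for the +([0-9]) pattern below
node_meminfo_value() {                                 # illustrative name, not the real common.sh function
    local node=$1 get=$2 line var val _
    local mem=()
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")                   # drop the leading "Node N " prefix, as in the trace
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# node_meminfo_value 0 HugePages_Total   -> 512 on this box
# node_meminfo_value 1 HugePages_Total   -> 512, the even split get_nodes expects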
setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.022 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.023 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 15383904 kB' 'MemUsed: 4070388 kB' 'SwapCached: 0 kB' 'Active: 1510656 kB' 'Inactive: 390464 kB' 'Active(anon): 1250188 kB' 'Inactive(anon): 0 kB' 'Active(file): 260468 kB' 'Inactive(file): 390464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1569896 kB' 'Mapped: 89196 kB' 'AnonPages: 331344 kB' 'Shmem: 918964 kB' 'KernelStack: 5512 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135924 kB' 'Slab: 304420 kB' 'SReclaimable: 135924 kB' 'SUnreclaim: 168496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
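The read/match/continue loop traced above and below is the per-node meminfo lookup: setup/common.sh maps /sys/devices/system/node/node1/meminfo into an array, strips the "Node 1 " prefix (the mem=("${mem[@]#Node +([0-9]) }") step), then scans field by field until it reaches the requested key (here HugePages_Surp) and echoes its value. A minimal stand-alone sketch of the same idea follows; the helper below is illustrative and assumes that prefix handling, it is not the SPDK function verbatim.

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup in the spirit of the trace above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local line var val _

        # Per-node lookups read the node-specific meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        while IFS= read -r line; do
            line=${line#Node +([0-9]) }           # drop the "Node N " prefix on per-node files
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # skip every other field, as in the trace
            echo "${val:-0}"
            return 0
        done < "$mem_f"
        echo 0
    }

    get_meminfo HugePages_Surp 1    # prints 0 for the node 1 values dumped above
    get_meminfo HugePages_Free 1    # prints 512

With the node 1 values printed in the trace above, such a lookup reports HugePages_Surp as 0 and HugePages_Free as 512, which is what feeds the "node1=512 expecting 512" check later in this test.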
00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.024 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:53.025 node0=512 expecting 512 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:53.025 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:53.285 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:53.285 node1=512 expecting 512 00:05:53.285 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:53.285 00:05:53.285 real 0m2.277s 00:05:53.285 user 0m0.989s 00:05:53.285 sys 0m1.278s 00:05:53.285 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.285 08:19:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:53.285 ************************************ 00:05:53.285 END TEST per_node_1G_alloc 00:05:53.285 ************************************ 00:05:53.285 08:19:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:53.285 08:19:05 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:53.285 08:19:05 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.285 08:19:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.285 08:19:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:53.285 ************************************ 00:05:53.285 START TEST even_2G_alloc 00:05:53.285 ************************************ 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.285 08:19:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:54.663 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:54.663 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 
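The get_test_nr_hugepages trace at the start of even_2G_alloc asks for 2097152 kB with HUGE_EVEN_ALLOC=yes on a two-node box: 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, which is then split evenly so nodes_test[0]=nodes_test[1]=512, the same 512-per-node expectation the previous test just verified. A small stand-alone sketch of that arithmetic; the variable names are illustrative, and the node count and hugepage size are assumptions read off the trace rather than queried from the system.

    #!/usr/bin/env bash
    # Sketch of the even split behind HUGE_EVEN_ALLOC=yes as traced above.
    size_kb=2097152              # requested size passed to get_test_nr_hugepages
    default_hugepage_kb=2048     # assumed default hugepage size on this box
    no_nodes=2                   # assumed NUMA node count on this box

    nr_hugepages=$(( size_kb / default_hugepage_kb ))    # 1024

    # Spread the pages evenly across all nodes, highest index first,
    # mirroring the nodes_test[_no_nodes - 1]=512 assignments in the trace.
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]} expecting $(( nr_hugepages / no_nodes ))"
    done
    # node0=512 expecting 512
    # node1=512 expecting 512

verify_nr_hugepages then re-reads each node's meminfo with the same field-scanning loop shown earlier to confirm the HugePages_Total/HugePages_Free counters actually match these per-node targets.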
00:05:54.924 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:54.924 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:54.924 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:54.924 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:54.924 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:54.924 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:54.924 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:54.924 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:54.924 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:54.924 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:54.924 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:54.924 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:54.924 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:54.924 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:54.924 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29158152 kB' 'MemAvailable: 33093372 kB' 'Buffers: 2704 kB' 'Cached: 10125448 kB' 'SwapCached: 0 kB' 'Active: 6949496 kB' 'Inactive: 3677088 kB' 'Active(anon): 6553856 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501620 kB' 'Mapped: 223680 kB' 'Shmem: 6055424 kB' 'KReclaimable: 408616 kB' 'Slab: 767084 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358468 kB' 'KernelStack: 12368 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7648628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 
08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.924 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:54.925 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.190 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.190 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.190 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29159076 kB' 'MemAvailable: 33094296 kB' 'Buffers: 2704 kB' 'Cached: 10125456 kB' 'SwapCached: 0 kB' 'Active: 6946176 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498372 kB' 'Mapped: 223936 kB' 'Shmem: 6055432 kB' 'KReclaimable: 408616 kB' 'Slab: 767084 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358468 kB' 'KernelStack: 12448 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7645452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.191 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.192 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29156268 kB' 'MemAvailable: 33091488 kB' 'Buffers: 2704 kB' 'Cached: 10125476 kB' 'SwapCached: 0 kB' 'Active: 6949548 kB' 'Inactive: 3677088 kB' 'Active(anon): 6553908 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501772 kB' 'Mapped: 223468 kB' 'Shmem: 6055452 kB' 'KReclaimable: 408616 kB' 'Slab: 767108 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358492 kB' 'KernelStack: 12448 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7648904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
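The trace above is bash -x output from a get_meminfo-style helper in setup/common.sh: it scans the meminfo dump one "Key: value" pair at a time with IFS=': ' read, continues past every field that is not the one requested (HugePages_Surp, then HugePages_Rsvd), and echoes the value once the key matches. A minimal, self-contained sketch of that pattern follows; it is an approximation of the traced logic, not the repository's exact code.

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node <digits> " prefix pattern, as in the traced script

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy; its lines carry a "Node N " prefix
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }              # strip the per-node prefix if present
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then            # every other key is skipped, as in the trace
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # Example: get_meminfo HugePages_Surp      -> 0 on this host
    #          get_meminfo HugePages_Total 0   -> per-node value from node0/meminfo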
00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.193 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.194 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:55.195 nr_hugepages=1024 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:55.195 resv_hugepages=0 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:55.195 surplus_hugepages=0 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:55.195 anon_hugepages=0 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
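By this point the helper has reported HugePages_Surp=0 and HugePages_Rsvd=0, the script echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and it then re-reads HugePages_Total to confirm the pool it configured is fully accounted for. The consistency check amounts to the bookkeeping below, written as a standalone sketch that reuses the hypothetical get_meminfo helper from the earlier note; the variable names mirror the values echoed in the log rather than the script's exact identifiers.

    # Values observed in the meminfo dump above
    nr_hugepages=1024                      # requested pool: 1024 pages * 2048 kB = 2097152 kB (the Hugetlb line)
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run

    # The pool is consistent only if every allocated page is requested, surplus, or reserved
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "unexpected hugepage accounting: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
        exit 1
    fi

With two NUMA nodes present, the even_2G_alloc case then expects those 1024 pages to be split 512 per node, which is what the per-node get_meminfo calls against /sys/devices/system/node/node0/meminfo later in the trace verify.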
00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29155388 kB' 'MemAvailable: 33090608 kB' 'Buffers: 2704 kB' 'Cached: 10125496 kB' 'SwapCached: 0 kB' 'Active: 6945308 kB' 'Inactive: 3677088 kB' 'Active(anon): 6549668 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497516 kB' 'Mapped: 223452 kB' 'Shmem: 6055472 kB' 'KReclaimable: 408616 kB' 'Slab: 767108 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358492 kB' 'KernelStack: 12432 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7644140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.195 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 
08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.196 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 
08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 13772572 kB' 'MemUsed: 10799784 kB' 'SwapCached: 0 kB' 'Active: 5428872 kB' 'Inactive: 3286624 kB' 'Active(anon): 5293700 kB' 'Inactive(anon): 0 kB' 'Active(file): 135172 kB' 'Inactive(file): 3286624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8550048 kB' 'Mapped: 134200 kB' 'AnonPages: 168708 kB' 'Shmem: 5128252 kB' 'KernelStack: 6936 kB' 'PageTables: 
3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272724 kB' 'Slab: 462640 kB' 'SReclaimable: 272724 kB' 'SUnreclaim: 189916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.197 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.198 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 15384424 kB' 'MemUsed: 4069868 kB' 'SwapCached: 0 kB' 'Active: 1516760 kB' 'Inactive: 390464 kB' 'Active(anon): 1256292 kB' 'Inactive(anon): 0 kB' 'Active(file): 260468 kB' 'Inactive(file): 390464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1578192 kB' 'Mapped: 88884 kB' 'AnonPages: 329396 kB' 'Shmem: 927260 kB' 'KernelStack: 5592 kB' 'PageTables: 
4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135892 kB' 'Slab: 304468 kB' 'SReclaimable: 135892 kB' 'SUnreclaim: 168576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.199 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.459 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.459 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:55.460 node0=512 expecting 512 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:55.460 node1=512 expecting 512 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:55.460 00:05:55.460 real 0m2.107s 00:05:55.460 user 0m0.854s 00:05:55.460 sys 0m1.240s 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.460 08:19:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:55.460 ************************************ 00:05:55.460 END TEST even_2G_alloc 00:05:55.460 ************************************ 00:05:55.460 08:19:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:55.460 08:19:07 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:55.460 08:19:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.460 08:19:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.460 08:19:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:55.460 
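What the even_2G_alloc trace above boils down to: the test reserved 1024 pages of the default 2048 kB size (hugepages.sh@110 checks 1024 == nr_hugepages + surp + resv), and the per-node verification then reads HugePages_Surp out of /sys/devices/system/node/nodeN/meminfo for each node (falling back to /proc/meminfo when no node is given) and folds it into the expected count before comparing against the 512 pages that get_nodes found in sysfs for node0 and node1. Below is a minimal, condensed sketch of that check; the awk one-liner and the loop body are illustrative stand-ins for the field-by-field read loop being traced here, not the setup/common.sh or hugepages.sh code itself.

    get_meminfo() {  # usage: get_meminfo <field> [numa-node]  (sketch, not the traced helper)
        local field=$1 node=$2 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it, then match the field.
        awk -v f="$field:" '{ sub(/^Node [0-9]+ /, "") } $1 == f { print $2 }' "$mem_f"
    }
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        total=$(get_meminfo HugePages_Total "$n")
        surp=$(get_meminfo HugePages_Surp "$n")
        echo "node$n=$((total - surp)) expecting 512"   # 1024 pages split evenly over 2 nodes
    done

With both nodes reporting HugePages_Total 512 and HugePages_Surp 0, the check produces the same "node0=512 expecting 512" / "node1=512 expecting 512" lines seen in the log, and even_2G_alloc passes in about 2.1 s of real time before run_test moves on to odd_alloc.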
************************************ 00:05:55.460 START TEST odd_alloc 00:05:55.460 ************************************ 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.460 08:19:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:57.378 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:57.378 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:57.378 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:57.378 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:57.378 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:57.378 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:57.378 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:05:57.378 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:57.378 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:57.378 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:57.378 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:57.378 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:57.378 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:57.378 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:57.378 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:57.378 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:57.378 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29157652 kB' 'MemAvailable: 33092872 kB' 'Buffers: 2704 kB' 'Cached: 10125588 kB' 'SwapCached: 0 kB' 'Active: 6945652 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550012 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497708 kB' 'Mapped: 223180 kB' 'Shmem: 6055564 kB' 'KReclaimable: 408616 kB' 'Slab: 766980 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358364 kB' 'KernelStack: 12400 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 
'Committed_AS: 7644132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.378 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.379 08:19:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': '
[get_meminfo AnonHugePages scan, condensed: Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are each read with "IFS=': ' read -r var val _", compared against AnonHugePages and skipped with "continue"]
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
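The xtrace above is setup/common.sh's get_meminfo helper doing a plain linear scan of /proc/meminfo for one field. A minimal stand-alone re-creation of that loop is sketched below; it is simplified (the per-NUMA-node meminfo file selected via "local node=" is not handled) and the function name get_meminfo_sketch is ours, not SPDK's:

    #!/usr/bin/env bash
    # Sketch of the lookup loop visible in the trace (setup/common.sh@28-33).
    get_meminfo_sketch() {
        local get=$1 var val _ line
        local -a mem
        shopt -s extglob
        mapfile -t mem < /proc/meminfo
        mem=("${mem[@]#Node +([0-9]) }")            # strip "Node N " prefixes used by per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # "AnonHugePages: 0 kB" -> var=AnonHugePages val=0
            [[ $var == "$get" ]] || continue        # not the field we want, keep scanning
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch AnonHugePages                # prints 0 on this node, matching anon=0 above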
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:57.380 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29158188 kB' 'MemAvailable: 33093408 kB' 'Buffers: 2704 kB' 'Cached: 10125604 kB' 'SwapCached: 0 kB' 'Active: 6944816 kB' 'Inactive: 3677088 kB' 'Active(anon): 6549176 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496848 kB' 'Mapped: 223040 kB' 'Shmem: 6055580 kB' 'KReclaimable: 408616 kB' 'Slab: 766988 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358372 kB' 'KernelStack: 12400 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 'Committed_AS: 7644152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB'
[get_meminfo HugePages_Surp scan, condensed: every field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with "continue"]
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
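The meminfo snapshot above already carries the figures the odd_alloc step is checking: HugePages_Total and HugePages_Free are 1025, HugePages_Rsvd and HugePages_Surp are 0, and with the 2048 kB default page size the pool accounts for 1025 * 2048 kB = 2099200 kB, exactly the Hugetlb line. A quick stand-alone cross-check of that arithmetic (not part of the test scripts) could be:

    awk '/^HugePages_Total:/ {n = $2} /^Hugepagesize:/ {sz = $2}
         END {printf "%d pages x %d kB = %d kB\n", n, sz, n * sz}' /proc/meminfo
    # with the values logged here: 1025 x 2048 kB = 2099200 kB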
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:57.382 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29158188 kB' 'MemAvailable: 33093408 kB' 'Buffers: 2704 kB' 'Cached: 10125604 kB' 'SwapCached: 0 kB' 'Active: 6944984 kB' 'Inactive: 3677088 kB' 'Active(anon): 6549344 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497012 kB' 'Mapped: 223040 kB' 'Shmem: 6055580 kB' 'KReclaimable: 408616 kB' 'Slab: 766988 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358372 kB' 'KernelStack: 12400 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 'Committed_AS: 7644176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB'
[get_meminfo HugePages_Rsvd scan, condensed: every field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with "continue"]
00:05:57.384 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:57.384 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:57.384 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:57.678 nr_hugepages=1025
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:57.678 resv_hugepages=0
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:57.678 surplus_hugepages=0
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:57.678 anon_hugepages=0
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
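The two arithmetic guards above are what decide the odd_alloc step in this trace: the kernel must report exactly the 1025 hugepages that were requested, with no surplus or reserved pages left unaccounted for. A stand-alone sketch of that check using the values from this run (variable names mirror the trace; the pass/fail messages are ours):

    nr_hugepages=1025   # requested odd-sized pool
    anon=0              # AnonHugePages  from get_meminfo
    surp=0              # HugePages_Surp from get_meminfo
    resv=0              # HugePages_Rsvd from get_meminfo

    if (( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )); then
        echo "odd_alloc: kernel reports the expected 1025 hugepages"
    else
        echo "odd_alloc: hugepage accounting mismatch (surp=$surp resv=$resv)" >&2
    fi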
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:57.678 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29157468 kB' 'MemAvailable: 33092688 kB' 'Buffers: 2704 kB' 'Cached: 10125632 kB' 'SwapCached: 0 kB' 'Active: 6947492 kB' 'Inactive: 3677088 kB' 'Active(anon): 6551852 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499636 kB' 'Mapped: 223476 kB' 'Shmem: 6055608 kB' 'KReclaimable: 408616 kB' 'Slab: 766988 kB' 'SReclaimable: 408616 kB' 'SUnreclaim: 358372 kB' 'KernelStack: 12448 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352328 kB' 'Committed_AS: 7656972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB'
[get_meminfo HugePages_Total scan in progress, condensed: MemTotal through Percpu are each compared against HugePages_Total and skipped with "continue"; the raw trace resumes below]
00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.679 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 13769112 kB' 'MemUsed: 10803244 kB' 'SwapCached: 0 kB' 'Active: 5430016 kB' 'Inactive: 3286624 kB' 'Active(anon): 5294844 kB' 'Inactive(anon): 0 kB' 'Active(file): 135172 kB' 'Inactive(file): 3286624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8550064 kB' 'Mapped: 134196 kB' 'AnonPages: 169828 kB' 'Shmem: 5128268 kB' 'KernelStack: 6936 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272724 kB' 'Slab: 462552 kB' 'SReclaimable: 272724 kB' 'SUnreclaim: 189828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.680 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 15386844 kB' 'MemUsed: 4067448 kB' 'SwapCached: 0 kB' 'Active: 1516704 kB' 'Inactive: 390464 kB' 'Active(anon): 1256236 kB' 'Inactive(anon): 0 kB' 'Active(file): 260468 kB' 'Inactive(file): 390464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1578316 kB' 'Mapped: 89528 kB' 'AnonPages: 329044 kB' 'Shmem: 927384 kB' 'KernelStack: 5528 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135892 kB' 'Slab: 304428 kB' 'SReclaimable: 135892 kB' 'SUnreclaim: 168536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.681 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
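[Editor's aside] The surrounding hugepages.sh@115-117 loop is doing per-node accounting: for each NUMA node it adds the reserved count and then the node's HugePages_Surp (looked up through the scan in progress here) to nodes_test. A rough stand-alone equivalent, reconstructed from the trace rather than taken from setup/hugepages.sh (the awk lookup and the plain node[0-9]* glob are simplifications of the extglob/mapfile code the xtrace shows):

    declare -a nodes_test=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # per-node meminfo lines look like "Node 1 HugePages_Surp: 0";
        # take the last field as the surplus count for that node
        surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
        (( nodes_test[node] += surp ))
    done

On this runner both nodes report HugePages_Surp 0, so the surplus adds nothing and the test ends up comparing node0=512 against the expected 513 and node1=513 against the expected 512, as echoed at the end of the odd_alloc run.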
00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.682 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:57.683 node0=512 expecting 513 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:57.683 node1=513 expecting 512 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:57.683 00:05:57.683 real 0m2.209s 00:05:57.683 user 0m0.939s 00:05:57.683 sys 0m1.257s 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.683 08:19:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:57.683 ************************************ 00:05:57.683 END TEST odd_alloc 00:05:57.683 ************************************ 00:05:57.683 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:57.683 08:19:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:57.683 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.683 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.683 08:19:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:57.683 ************************************ 00:05:57.683 START TEST custom_alloc 00:05:57.683 ************************************ 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:57.683 08:19:10 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
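[Editor's aside] At this point custom_alloc has decided on two per-node targets, nodes_hp[0]=512 and nodes_hp[1]=1024. The next traced lines join them into the HUGENODE spec that setup.sh consumes and sum them into the expected total. A compact sketch of that assembly, reconstructed from the trace (the spec and total variable names below are placeholders; the script itself keeps HUGENODE as an array and accumulates into _nr_hugepages):

    declare -a nodes_hp=([0]=512 [1]=1024)
    declare -a HUGENODE=()
    total=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( total += nodes_hp[node] ))
    done
    # join the entries with commas, as the script's local IFS=, does
    spec=$(IFS=,; printf '%s' "${HUGENODE[*]}")
    echo "HUGENODE=$spec expected_total=$total"
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 expected_total=1536

The 1536 total is what the verification step below checks against HugePages_Total after setup output runs.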
00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:57.683 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:57.684 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:57.684 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:57.684 08:19:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:57.684 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.684 08:19:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:59.602 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:59.602 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:59.602 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:59.602 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:59.602 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:59.602 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:59.602 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:59.602 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:59.602 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:59.602 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:59.602 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:59.602 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:05:59.602 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:59.602 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:59.602 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:59.602 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:59.602 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28091444 kB' 'MemAvailable: 32026648 kB' 'Buffers: 2704 kB' 'Cached: 10125720 kB' 'SwapCached: 0 kB' 'Active: 6946448 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550808 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498352 kB' 'Mapped: 223192 kB' 'Shmem: 6055696 kB' 'KReclaimable: 408600 kB' 'Slab: 767108 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358508 kB' 'KernelStack: 12496 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 7644744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.602 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.603 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.604 08:19:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28092796 kB' 'MemAvailable: 32028000 kB' 'Buffers: 2704 kB' 'Cached: 10125724 kB' 'SwapCached: 0 kB' 'Active: 6946256 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550616 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498176 kB' 'Mapped: 223052 kB' 'Shmem: 6055700 kB' 'KReclaimable: 408600 kB' 'Slab: 767084 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358484 kB' 'KernelStack: 12480 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 7644764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.604 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
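The get_meminfo calls traced above (setup/common.sh lines 16-33) all follow the same pattern: pick a meminfo source (falling back to /proc/meminfo when no node-specific file such as /sys/devices/system/node/node<N>/meminfo exists), strip any leading "Node N " prefix, then scan key/value pairs until the requested key matches and echo its numeric value. The following is a minimal sketch of that loop, not the verbatim setup/common.sh code; the helper name get_meminfo and the paths are taken from the trace, the loop body is an assumption reconstructed from it.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

    get_meminfo() {
        # $1 = key to look up (e.g. HugePages_Surp), $2 = optional NUMA node number
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "

        local line var val _
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Surp:     0" into var=HugePages_Surp, val=0
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. prints 0 for HugePages_Surp above
                return 0
            fi
        done
        return 1
    }

With the 1536 pages reported in the meminfo dumps above, a call like "get_meminfo HugePages_Total" would print 1536, while "get_meminfo HugePages_Surp" and "get_meminfo HugePages_Rsvd" print 0, which is what the surp=0 and anon=0 assignments in the trace record.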
00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.605 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28093252 kB' 'MemAvailable: 32028456 kB' 'Buffers: 2704 kB' 'Cached: 10125724 kB' 'SwapCached: 0 kB' 'Active: 6945984 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550344 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497900 kB' 'Mapped: 223052 kB' 'Shmem: 6055700 kB' 'KReclaimable: 408600 kB' 'Slab: 767084 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358484 kB' 'KernelStack: 12480 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 7644784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.606 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.607 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:59.608 nr_hugepages=1536 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:59.608 resv_hugepages=0 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:59.608 surplus_hugepages=0 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:59.608 anon_hugepages=0 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 28092500 kB' 'MemAvailable: 32027704 kB' 'Buffers: 2704 kB' 'Cached: 10125760 kB' 'SwapCached: 0 kB' 'Active: 6946340 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550700 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498176 kB' 'Mapped: 223052 kB' 'Shmem: 6055736 kB' 'KReclaimable: 408600 kB' 'Slab: 767084 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358484 kB' 'KernelStack: 12480 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829064 kB' 'Committed_AS: 7644804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.872 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:59.873 
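The long run of "[[ <key> == ... ]] / continue" entries above is one pass of the get_meminfo helper scanning a meminfo snapshot until it reaches the requested key (here HugePages_Total, answered with 1536). A minimal sketch of that lookup, assuming bash with extglob; the name get_meminfo_sketch and its fallbacks are illustrative stand-ins, not the actual setup/common.sh implementation (which, as traced, snapshots the file with mapfile and walks the array):

shopt -s extglob                                # needed for the +([0-9]) prefix strip below
get_meminfo_sketch() {
    # usage: get_meminfo_sketch HugePages_Total        (whole system)
    #        get_meminfo_sketch HugePages_Surp 0       (NUMA node 0 only)
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # a per-node meminfo file is used only when a node id is given and the file exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }             # per-node files prefix every line with "Node <id> "
        IFS=': ' read -r var val _ <<<"$line"   # split "Key:   value kB" into key and value
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                                      # key absent: report 0
}

On the box traced here, get_meminfo_sketch HugePages_Total would print 1536 and get_meminfo_sketch HugePages_Rsvd would print 0, matching the values echoed in this log.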
08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 13782376 kB' 'MemUsed: 10789980 kB' 'SwapCached: 0 kB' 'Active: 5429148 kB' 'Inactive: 3286624 kB' 'Active(anon): 5293976 kB' 'Inactive(anon): 0 kB' 'Active(file): 135172 kB' 'Inactive(file): 3286624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8550124 kB' 'Mapped: 134196 kB' 'AnonPages: 168772 kB' 'Shmem: 5128328 kB' 'KernelStack: 6888 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272724 kB' 'Slab: 462488 kB' 'SReclaimable: 272724 kB' 'SUnreclaim: 189764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.873 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- 
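The per-node reads traced here (HugePages_Surp for node 0 above, repeated for node 1 below) feed the node0=512 / node1=1024 expectation printed at the end of this test. A hedged sketch of checking that split directly from sysfs, assuming the 2048 kB Hugepagesize reported in this log; the function name and argument convention are hypothetical, not the hugepages.sh API:

check_node_split_sketch() {
    # usage: check_node_split_sketch 512 1024   (expected 2 MiB pages per node, in node order)
    local -a expected=("$@")
    local node count idx=0
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -e $node/hugepages/hugepages-2048kB/nr_hugepages ]] || continue
        count=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${idx}=${count} expecting ${expected[idx]:-?}"
        [[ $count -eq ${expected[idx]:-0} ]] || return 1
        idx=$((idx + 1))
    done
}

Run as check_node_split_sketch 512 1024 on this system, it would print the same "node0=512 expecting 512" / "node1=1024 expecting 1024" lines the test echoes further down.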
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.874 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454292 kB' 'MemFree: 14320740 kB' 'MemUsed: 5133552 kB' 'SwapCached: 0 kB' 'Active: 1516756 kB' 'Inactive: 390464 kB' 'Active(anon): 1256288 kB' 'Inactive(anon): 0 kB' 'Active(file): 260468 kB' 'Inactive(file): 390464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1578344 kB' 'Mapped: 88856 kB' 'AnonPages: 328964 kB' 'Shmem: 927412 kB' 'KernelStack: 5544 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135876 kB' 'Slab: 304596 kB' 'SReclaimable: 135876 kB' 'SUnreclaim: 168720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:05:59.875 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:59.875 [setup/common.sh@31-@32: the remaining node meminfo fields (Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) are read and skipped, none matching HugePages_Surp]
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
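The scans condensed above all follow the same lookup pattern from setup/common.sh's get_meminfo: pick /proc/meminfo or the per-node meminfo file, strip the "Node N" prefix, then read each "field: value" pair until the requested key matches. A minimal sketch of that lookup, reconstructed only from the traced commands (the vendored helper in the SPDK repo may differ in detail):

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above; not the vendored setup/common.sh.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}              # field name, optional NUMA node
    local mem_f=/proc/meminfo
    # With a node argument the per-node file is used instead (as in the node0/node1 scans).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines start with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the long [[ ... ]] / continue runs in the log
        echo "$val"
        return 0
    done
    return 1
}

# Example: get_meminfo HugePages_Surp    -> prints "0" for the snapshots in this run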
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:59.876 node0=512 expecting 512
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:59.876 node1=1024 expecting 1024
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:59.876
00:05:59.876 real	0m2.166s
00:05:59.876 user	0m0.896s
00:05:59.876 sys	0m1.256s
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:59.876 08:19:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:59.876 ************************************
00:05:59.876 END TEST custom_alloc
00:05:59.876 ************************************
00:05:59.876 08:19:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:59.876 08:19:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:59.876 08:19:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:59.876 08:19:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:59.876 08:19:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:59.876 ************************************
00:05:59.876 START TEST no_shrink_alloc
00:05:59.876 ************************************
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:59.876 [setup/hugepages.sh@49-@67: size=2097152, node_ids=('0'), (( size >= default_hugepages )), nr_hugepages=1024, get_test_nr_hugepages_per_node 0, user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, nodes_test=()]
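For reference, the numbers the no_shrink_alloc setup above starts from reduce to a single division; the arithmetic below is an assumption consistent with the traced values (2097152 kB requested, the 2048 kB Hugepagesize reported in the meminfo dumps, 1024 pages placed on node 0), not a copy of get_test_nr_hugepages:

# Hypothetical back-of-the-envelope for the traced "get_test_nr_hugepages 2097152 0" call.
size_kb=2097152                              # requested pool size in kB (from the trace)
hugepage_kb=2048                             # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))    # 1024, matching nr_hugepages=1024 in the trace
declare -a nodes_test
nodes_test[0]=$nr_hugepages                  # the single user node "0" carries the whole request
echo "requesting ${nodes_test[0]} hugepages on node0"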
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:59.876 08:19:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:01.786 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:01.786 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:01.786 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:01.786 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:01.786 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:01.786 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:01.786 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:01.786 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:01.786 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:01.786 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:01.786 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:01.786 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:01.786 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:01.786 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:01.786 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:01.786 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:01.786 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:01.786 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:06:02.051 [setup/hugepages.sh@89-@94: local node sorted_t sorted_s surp resv anon]
00:06:02.051 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:02.051 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:02.051 [setup/common.sh@17-@20: local get=AnonHugePages, node=, var val, mem_f mem]
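The hugepages.sh@96 test above is a guard on transparent hugepages: the compared string is the content of the THP "enabled" knob with the active mode bracketed, and the pattern only rejects the case where "[never]" is selected. A standalone sketch of that check (the sysfs path and variable names are assumptions; only the pattern test itself appears in the trace):

# Hypothetical rendering of the hugepages.sh@96 guard traced above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not forced off, so record AnonHugePages (0 kB in the snapshots below)
    # for the verification that follows.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi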
00:06:02.051 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:02.051 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:02.051 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:02.051 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:02.051 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:02.052 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29122180 kB' 'MemAvailable: 33057384 kB' 'Buffers: 2704 kB' 'Cached: 10125852 kB' 'SwapCached: 0 kB' 'Active: 6946516 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550876 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498292 kB' 'Mapped: 223084 kB' 'Shmem: 6055828 kB' 'KReclaimable: 408600 kB' 'Slab: 767008 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358408 kB' 'KernelStack: 12528 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7645004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB'
00:06:02.053 [setup/common.sh@31-@32: the snapshot is read field by field until AnonHugePages matches]
00:06:02.053 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:02.053 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:02.053 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:02.053 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:02.053 [setup/common.sh@17-@29: get=HugePages_Surp, node unset, /proc/meminfo re-read as above]
00:06:02.053 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [second snapshot, identical to the first except: MemFree: 29122908 kB, MemAvailable: 33058112 kB, Active: 6946624 kB, Active(anon): 6550984 kB, AnonPages: 498352 kB, Mapped: 223068 kB, Slab: 767044 kB, SUnreclaim: 358444 kB, KernelStack: 12512 kB, PageTables: 7820 kB, Committed_AS: 7645020 kB, VmallocUsed: 195744 kB]
00:06:02.054 [setup/common.sh@31-@32: the snapshot is read field by field until HugePages_Surp matches]
00:06:02.055 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:02.055 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:02.055 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:02.055 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:02.055 [setup/common.sh@17-@29: get=HugePages_Rsvd, node unset, /proc/meminfo re-read as above]
00:06:02.055 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [third snapshot, identical to the second except: Cached: 10125872 kB, Active: 6946656 kB, Active(anon): 6551016 kB, AnonPages: 498356 kB, Shmem: 6055848 kB, Committed_AS: 7645044 kB, VmallocUsed: 195728 kB]
00:06:02.056 [setup/common.sh@31-@32: field-by-field read of the snapshot toward HugePages_Rsvd in progress]
00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.056 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
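
The xtrace entries running through this stretch are all single calls to get_meminfo() in setup/common.sh: the function dumps /proc/meminfo and walks it key by key with IFS=': ' until the requested field matches (HugePages_Surp above, HugePages_Rsvd in the entries around this point, HugePages_Total just after), then echoes that field's value. A minimal standalone sketch of the same lookup, reading the file directly instead of going through the script's mapfile/printf round trip (the function name and that simplification are ours, not the project's):

get_meminfo_sketch() {
    # $1 is the /proc/meminfo key to look up, e.g. HugePages_Rsvd.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Keys never contain spaces, so a literal compare is enough; for
        # "kB" fields the unit lands in $_ and only the number is echoed.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Rsvd    # prints 0 on this host, matching the trace
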
00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:02.057 nr_hugepages=1024 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:02.057 resv_hugepages=0 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:02.057 surplus_hugepages=0 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:02.057 anon_hugepages=0 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.057 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29123224 kB' 'MemAvailable: 33058428 kB' 'Buffers: 2704 kB' 'Cached: 10125912 kB' 'SwapCached: 0 kB' 'Active: 6946308 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550668 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498004 kB' 'Mapped: 223068 kB' 'Shmem: 6055888 kB' 'KReclaimable: 408600 kB' 'Slab: 767044 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358444 kB' 'KernelStack: 12512 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7645064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.058 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:02.059 08:19:14 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.059 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 12737588 kB' 'MemUsed: 11834768 kB' 'SwapCached: 0 kB' 'Active: 5429804 kB' 'Inactive: 3286624 kB' 'Active(anon): 5294632 kB' 'Inactive(anon): 0 kB' 'Active(file): 135172 kB' 'Inactive(file): 3286624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8550200 kB' 'Mapped: 134196 kB' 'AnonPages: 169340 kB' 'Shmem: 5128404 kB' 'KernelStack: 6952 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272724 kB' 'Slab: 462536 kB' 'SReclaimable: 272724 kB' 'SUnreclaim: 189812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
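
Before the per-node pass that continues below, get_nodes (setup/hugepages.sh@27-33) found two NUMA nodes (no_nodes=2) and seeded the expected split as nodes_sys[0]=1024 and nodes_sys[1]=0; verify_nr_hugepages has already confirmed the global identity HugePages_Total == nr_hugepages + surp + resv for the 1024 pages. For the node-level check, get_meminfo is called with a node argument, so common.sh@23-24 swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo before doing the same field walk. One way to pull the same per-node totals straight from sysfs, as a hedged sketch (the helper name is ours; the real script keeps the counts in its nodes_sys/nodes_test arrays instead):

node_hugepage_totals() {
    # Print "nodeN=<HugePages_Total>" for every NUMA node -- the same
    # numbers the nodes_sys/nodes_test arrays carry in the trace.
    local node
    for node in /sys/devices/system/node/node[0-9]*; do
        awk -v n="${node##*node}" \
            '$3 == "HugePages_Total:" { print "node" n "=" $4 }' \
            "$node/meminfo"
    done
}

node_hugepage_totals    # on this host: node0=1024, node1=0
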
00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.060 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.321 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.321 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.321 08:19:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:06:02.321 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@32 compares the remaining keys of the current meminfo snapshot, KernelStack through HugePages_Free, against HugePages_Surp and skips each with continue]
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:02.322 08:19:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:04.239 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:04.239 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:04.239 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:04.239 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:04.239 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:04.239 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:04.239 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:04.239 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:04.239 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:06:04.239 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:06:04.239 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:06:04.239 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:06:04.239 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:06:04.239 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:06:04.239 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:06:04.239 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:06:04.239 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
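At this point hugepages.sh has confirmed node0=1024 expecting 1024 for the first allocation and, with CLEAR_HUGE=no and NRHUGE=512, re-runs scripts/setup.sh; that is what produces the INFO line that follows. For reproducing this step by hand, a minimal sketch assuming only what the trace shows (setup.sh reads NRHUGE and CLEAR_HUGE from its environment; the checkout path is this job's workspace, and SPDK_DIR is just a local convenience variable):

#!/usr/bin/env bash
# Sketch only: re-run the SPDK hugepage setup the way the test harness does.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Request 512 x 2 MiB hugepages without freeing the 1024 pages already allocated.
sudo NRHUGE=512 CLEAR_HUGE=no "$SPDK_DIR/scripts/setup.sh"

# The no_shrink_alloc test expects the existing pool to survive this call:
grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo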
00:06:04.239 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.239 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29110140 kB' 'MemAvailable: 33045344 kB' 'Buffers: 2704 kB' 'Cached: 10125972 kB' 'SwapCached: 0 kB' 'Active: 6947232 kB' 'Inactive: 3677088 kB' 'Active(anon): 6551592 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498840 kB' 'Mapped: 223080 kB' 'Shmem: 6055948 kB' 'KReclaimable: 408600 kB' 'Slab: 767384 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358784 kB' 'KernelStack: 12512 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7645584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB'
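The xtrace that follows is common.sh's get_meminfo walking that snapshot one "key: value" pair at a time until it reaches AnonHugePages. A stand-alone sketch of the same lookup pattern (the helper name meminfo_value is hypothetical, not SPDK's function; the loop mirrors the IFS=': ' read / continue sequence visible in the trace):

#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup seen in the trace.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is skipped, as in the trace
        echo "$val"                        # e.g. "0" for AnonHugePages, "1024" for HugePages_Total
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value AnonHugePages
meminfo_value HugePages_Surp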
[xtrace condensed: setup/common.sh@32 compares each key of that snapshot, MemTotal through HardwareCorrupted, against AnonHugePages and skips it with continue]
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.240 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.241 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.241 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.241 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.241 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot; hugepage counters unchanged: 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB'; only transient counters moved slightly, e.g. 'MemFree: 29111000 kB' 'AnonPages: 498292 kB' 'KernelStack: 12496 kB' 'PageTables: 7788 kB' 'Committed_AS: 7645600 kB' 'VmallocUsed: 195760 kB']
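This call, like the previous one, is made without a node argument, so mem_f stays /proc/meminfo; when hugepages.sh queries a specific node, the same lookup runs against /sys/devices/system/node/node<N>/meminfo, and the mem=("${mem[@]#Node +([0-9]) }") line above strips the "Node N " prefix those files put in front of every key. A sketch of that per-node variant (node_meminfo_value is a hypothetical name, not SPDK's common.sh):

#!/usr/bin/env bash
# Sketch: the same lookup against a per-node meminfo file.
node_meminfo_value() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # strip the "Node N " prefix
    return 1
}

node_meminfo_value HugePages_Surp        # system-wide view
node_meminfo_value HugePages_Total 0     # node 0 only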
[xtrace condensed: setup/common.sh@32 compares each key of that snapshot, MemTotal through HugePages_Rsvd, against HugePages_Surp and skips it with continue]
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:04.242 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.243 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' [third /proc/meminfo snapshot; hugepage counters still unchanged: 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB'; transient counters again differ only slightly, e.g. 'MemFree: 29110964 kB' 'AnonPages: 498532 kB' 'PageTables: 7832 kB' 'Committed_AS: 7645624 kB']
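verify_nr_hugepages has now collected anon=0 and surp=0 and reads HugePages_Rsvd from the same snapshot next. When checking this state by hand outside the harness, the same counters are also exposed through sysfs; a small sketch using the standard kernel hugetlb paths (the hugepages-2048kB directory matches the Hugepagesize: 2048 kB reported above; these paths come from the kernel, not from this log):

#!/usr/bin/env bash
# Sketch: cross-check the hugepage pool through sysfs instead of /proc/meminfo.
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%-18s %s\n' "$f" "$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/$f)"
done

# Per-NUMA-node view of the pool (node0 here, matching "node0=1024 expecting 1024"):
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages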
[xtrace condensed: setup/common.sh@32 compares each key of that snapshot, MemTotal onwards, against HugePages_Rsvd and skips it with continue; the scan has reached SecPageTables at this point]
00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.244 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:04.245 nr_hugepages=1024 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:04.245 resv_hugepages=0 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:04.245 surplus_hugepages=0 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:04.245 anon_hugepages=0 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
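The hugepages.sh@100-@109 steps above take the value just extracted for HugePages_Rsvd (0), echo the pool summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and then verify that the requested 1024 pages are still fully accounted for before re-reading HugePages_Total below. A minimal sketch of that bookkeeping follows, using awk in place of the script's read loop purely for brevity; the variable names follow the trace and the 1024 target is the value requested by this run:

nr_hugepages=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages"   # 1024 in this run
echo "resv_hugepages=$resv"         # 0
echo "surplus_hugepages=$surp"      # 0

# no_shrink_alloc passes only if allocated + surplus + reserved still add up
# to the 1024 pages that were requested, i.e. nothing was shrunk away.
(( 1024 == nr_hugepages + surp + resv )) && echo 'hugepage pool intact'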
00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026648 kB' 'MemFree: 29111932 kB' 'MemAvailable: 33047136 kB' 'Buffers: 2704 kB' 'Cached: 10126036 kB' 'SwapCached: 0 kB' 'Active: 6946588 kB' 'Inactive: 3677088 kB' 'Active(anon): 6550948 kB' 'Inactive(anon): 0 kB' 'Active(file): 395640 kB' 'Inactive(file): 3677088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498144 kB' 'Mapped: 223080 kB' 'Shmem: 6056012 kB' 'KReclaimable: 408600 kB' 'Slab: 767348 kB' 'SReclaimable: 408600 kB' 'SUnreclaim: 358748 kB' 'KernelStack: 12496 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353352 kB' 'Committed_AS: 7645644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1619548 kB' 'DirectMap2M: 11931648 kB' 'DirectMap1G: 38797312 kB' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.245 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.246 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 12727656 kB' 'MemUsed: 11844700 kB' 'SwapCached: 0 kB' 'Active: 5430584 kB' 'Inactive: 3286624 kB' 'Active(anon): 5295412 kB' 'Inactive(anon): 0 kB' 'Active(file): 135172 kB' 'Inactive(file): 3286624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8550216 kB' 'Mapped: 134196 kB' 'AnonPages: 170280 kB' 'Shmem: 5128420 kB' 'KernelStack: 7000 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272724 kB' 'Slab: 462712 kB' 'SReclaimable: 272724 kB' 'SUnreclaim: 189988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.247 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:04.508 node0=1024 expecting 1024 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:04.508 00:06:04.508 real 0m4.462s 00:06:04.508 user 0m1.774s 00:06:04.508 sys 0m2.662s 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.508 08:19:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:04.508 ************************************ 00:06:04.508 END TEST no_shrink_alloc 
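Nearly all of the trace above comes from one helper, setup/common.sh's get_meminfo, running under xtrace: it loads a meminfo file, strips any leading "Node N " prefix, and scans key/value pairs until it reaches the requested field, so every rejected key shows up as one [[ ... ]] / continue pair. A minimal reconstruction of that loop, inferred from the trace rather than copied from the script (a per-line read stands in for the script's mapfile, but the behaviour is the same):

shopt -s extglob   # needed for the +([0-9]) pattern, as at common.sh@29

get_meminfo() {
    local get=$1        # field to extract, e.g. HugePages_Total or HugePages_Surp
    local node=${2:-}   # optional NUMA node; empty means the system-wide file
    local mem_f=/proc/meminfo
    local line var val

    # Per-node queries read that node's own meminfo (common.sh@23-@24).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#Node +([0-9]) }            # drop the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue       # each miss is one xtrace line above
        echo "$val"                            # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done < "$mem_f"
    return 1
}

# Roughly the per-node check that ends above with "node0=1024 expecting 1024":
echo "node0=$(get_meminfo HugePages_Total 0) expecting 1024"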
00:06:04.508 ************************************ 00:06:04.508 08:19:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:04.508 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:04.509 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:04.509 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:04.509 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:04.509 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:04.509 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:04.509 08:19:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:04.509 00:06:04.509 real 0m17.205s 00:06:04.509 user 0m6.773s 00:06:04.509 sys 0m9.537s 00:06:04.509 08:19:16 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.509 08:19:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:04.509 ************************************ 00:06:04.509 END TEST hugepages 00:06:04.509 ************************************ 00:06:04.509 08:19:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:04.509 08:19:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:04.509 08:19:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.509 08:19:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.509 08:19:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:04.509 ************************************ 00:06:04.509 START TEST driver 00:06:04.509 ************************************ 00:06:04.509 08:19:16 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:04.509 * Looking for test storage... 
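Before handing off to the driver test, clear_hp (hugepages.sh@217 and @37-@45 above) walks every NUMA node and every hugepage size, echoes 0 into the corresponding count, and exports CLEAR_HUGE=yes so later stages know the pool was already released. The redirect target is not visible in xtrace output; the sketch below assumes the standard sysfs knob nr_hugepages and is a reconstruction, not the script itself:

shopt -s nullglob   # skip the loops cleanly on hosts without these sysfs entries

clear_hp() {
    local node_dir hp
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node_dir"/hugepages/hugepages-*; do
            # Release every hugepage of every size on this node (needs root).
            echo 0 > "$hp/nr_hugepages"
        done
    done
    # Signal later test stages that hugepages were already cleared.
    export CLEAR_HUGE=yes
}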
00:06:04.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:04.509 08:19:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:04.509 08:19:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:04.509 08:19:16 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:07.802 08:19:20 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:07.802 08:19:20 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.802 08:19:20 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.802 08:19:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:08.061 ************************************ 00:06:08.061 START TEST guess_driver 00:06:08.061 ************************************ 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:06:08.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:08.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:08.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:08.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:08.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:06:08.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:06:08.061 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:08.061 08:19:20 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:08.061 Looking for driver=vfio-pci 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.061 08:19:20 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:09.967 08:19:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:10.916 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:10.916 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:10.916 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:10.916 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:10.916 08:19:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:10.916 08:19:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:10.916 08:19:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:14.215 00:06:14.215 real 0m6.345s 00:06:14.215 user 0m1.562s 00:06:14.215 sys 0m2.869s 00:06:14.215 08:19:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.215 08:19:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:14.215 ************************************ 00:06:14.215 END TEST guess_driver 00:06:14.215 ************************************ 00:06:14.215 08:19:26 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:06:14.215 00:06:14.215 real 0m9.828s 00:06:14.215 user 0m2.347s 00:06:14.215 sys 0m4.543s 00:06:14.215 08:19:26 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.215 08:19:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:14.215 ************************************ 00:06:14.215 END TEST driver 00:06:14.215 ************************************ 00:06:14.475 08:19:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:14.475 08:19:26 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:14.475 08:19:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.475 08:19:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.475 08:19:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:14.475 ************************************ 00:06:14.475 START TEST devices 00:06:14.475 ************************************ 00:06:14.475 08:19:26 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:14.475 * Looking for test storage... 00:06:14.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:14.475 08:19:26 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:14.475 08:19:26 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:14.475 08:19:26 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:14.475 08:19:26 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:17.017 
08:19:29 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:17.017 No valid GPT data, bailing 00:06:17.017 08:19:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:17.017 08:19:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:17.017 08:19:29 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:17.017 08:19:29 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.017 08:19:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:17.017 ************************************ 00:06:17.017 START TEST nvme_mount 00:06:17.017 ************************************ 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:17.017 08:19:29 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:17.957 Creating new GPT entries in memory. 00:06:17.957 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:17.957 other utilities. 00:06:17.957 08:19:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:17.957 08:19:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:17.957 08:19:30 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:17.957 08:19:30 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:17.957 08:19:30 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:18.896 Creating new GPT entries in memory. 00:06:18.896 The operation has completed successfully. 00:06:18.896 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:18.896 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:18.896 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2168227 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:19.157 08:19:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:19.157 08:19:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:21.065 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:21.065 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:21.325 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:21.325 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:21.325 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:21.325 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:21.325 08:19:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.231 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:23.232 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.491 08:19:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
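[Editor's note - illustrative sketch, not part of the captured trace.] The nvme_mount activity above reduces to: wipe the GPT, create one ~1 GiB partition, format it ext4, mount it under the test directory, drop a marker file for the verify step, then unmount and wipe before reformatting the whole disk. A condensed, hedged equivalent of that cycle follows; DISK and MNT are placeholders (the real test derives the disk from spdk-gpt.py and uses the workspace nvme_mount directory), and the sgdisk bounds and mkfs flags simply mirror what the trace shows.

# Sketch only: the partition -> format -> mount -> verify -> cleanup cycle from nvme_mount.
DISK=/dev/nvme0n1              # placeholder scratch disk; destructive on real hardware
MNT=/tmp/nvme_mount_sketch     # placeholder mount point

sgdisk "$DISK" --zap-all                     # destroy existing GPT/MBR metadata
sgdisk "$DISK" --new=1:2048:2099199          # one ~1 GiB partition, same bounds as the trace
mkfs.ext4 -qF "${DISK}p1"                    # quiet, forced ext4, as in setup/common.sh mkfs()
mkdir -p "$MNT" && mount "${DISK}p1" "$MNT"
touch "$MNT/test_nvme"                       # dummy file the verify step checks for

umount "$MNT"                                # cleanup_nvme: unmount, then erase signatures
wipefs --all "${DISK}p1"
wipefs --all "$DISK"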
00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:25.398 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:25.398 00:06:25.398 real 0m8.430s 00:06:25.398 user 0m2.193s 00:06:25.398 sys 0m3.886s 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.398 08:19:37 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.398 ************************************ 00:06:25.398 END TEST nvme_mount 00:06:25.398 ************************************ 00:06:25.398 08:19:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:25.398 08:19:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:25.398 08:19:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.398 08:19:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.398 08:19:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:25.398 ************************************ 00:06:25.398 START TEST dm_mount 00:06:25.398 ************************************ 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:25.398 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:25.399 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.399 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:25.399 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:25.399 08:19:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:26.779 Creating new GPT entries in memory. 00:06:26.779 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:26.779 other utilities. 00:06:26.779 08:19:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:26.779 08:19:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:26.779 08:19:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:26.779 08:19:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:26.779 08:19:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:27.720 Creating new GPT entries in memory. 00:06:27.720 The operation has completed successfully. 00:06:27.720 08:19:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:27.720 08:19:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:27.720 08:19:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:27.720 08:19:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:27.720 08:19:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:28.660 The operation has completed successfully. 00:06:28.660 08:19:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:28.660 08:19:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:28.660 08:19:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2170785 00:06:28.660 08:19:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:28.660 08:19:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.660 08:19:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:28.660 08:19:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28.660 08:19:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:30.571 08:19:42 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.571 08:19:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:32.481 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:32.481 00:06:32.481 real 0m7.046s 00:06:32.481 user 0m1.362s 00:06:32.481 sys 0m2.557s 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.481 08:19:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:32.481 ************************************ 00:06:32.481 END TEST dm_mount 00:06:32.481 ************************************ 00:06:32.481 08:19:44 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:06:32.481 08:19:44 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:32.481 08:19:44 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:32.481 08:19:44 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:32.481 08:19:44 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:32.481 08:19:44 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:32.481 08:19:44 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:32.481 08:19:44 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:32.741 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:32.741 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:32.741 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:32.741 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:32.741 08:19:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:32.741 08:19:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:32.741 08:19:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:32.741 08:19:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:32.741 08:19:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:32.741 08:19:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:32.741 08:19:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:32.741 00:06:32.741 real 0m18.425s 00:06:32.741 user 0m4.604s 00:06:32.741 sys 0m8.157s 00:06:32.741 08:19:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.741 08:19:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:32.741 ************************************ 00:06:32.741 END TEST devices 00:06:32.741 ************************************ 00:06:33.000 08:19:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:33.000 00:06:33.000 real 1m1.265s 00:06:33.000 user 0m18.795s 00:06:33.000 sys 0m31.062s 00:06:33.000 08:19:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.000 08:19:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:33.000 ************************************ 00:06:33.000 END TEST setup.sh 00:06:33.000 ************************************ 00:06:33.000 08:19:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.000 08:19:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:34.941 Hugepages 00:06:34.941 node hugesize free / total 00:06:34.941 node0 1048576kB 0 / 0 00:06:34.941 node0 2048kB 2048 / 2048 00:06:34.941 node1 1048576kB 0 / 0 00:06:34.941 node1 2048kB 0 / 0 00:06:34.941 00:06:34.941 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:34.941 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:34.941 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:34.941 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:34.941 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:34.941 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:34.941 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:34.941 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:34.941 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.0 
8086 0e20 1 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:34.941 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:34.941 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:34.941 08:19:47 -- spdk/autotest.sh@130 -- # uname -s 00:06:34.941 08:19:47 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:34.941 08:19:47 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:34.941 08:19:47 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:36.848 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:36.848 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:36.848 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:36.848 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:36.848 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:36.848 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:37.108 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:37.108 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:37.108 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:38.047 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:38.047 08:19:50 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:39.428 08:19:51 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:39.429 08:19:51 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:39.429 08:19:51 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:39.429 08:19:51 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:39.429 08:19:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:39.429 08:19:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:39.429 08:19:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:39.429 08:19:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:39.429 08:19:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:39.429 08:19:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:39.429 08:19:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:06:39.429 08:19:51 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:40.809 Waiting for block devices as requested 00:06:40.809 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:06:41.068 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:41.068 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:41.327 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:41.327 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:41.327 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:41.587 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:41.587 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:41.587 0000:00:04.0 (8086 0e20): vfio-pci -> 
ioatdma 00:06:41.847 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:41.847 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:41.847 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:42.107 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:42.107 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:42.107 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:42.107 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:42.366 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:42.366 08:19:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:42.366 08:19:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:06:42.366 08:19:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:06:42.366 08:19:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:42.366 08:19:54 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:42.366 08:19:54 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:42.366 08:19:54 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:06:42.366 08:19:54 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:42.366 08:19:54 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:42.366 08:19:54 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:42.366 08:19:54 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:42.366 08:19:54 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:42.366 08:19:54 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:42.366 08:19:54 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:42.366 08:19:54 -- common/autotest_common.sh@1557 -- # continue 00:06:42.366 08:19:54 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:42.366 08:19:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.366 08:19:54 -- common/autotest_common.sh@10 -- # set +x 00:06:42.625 08:19:54 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:42.625 08:19:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.625 08:19:54 -- common/autotest_common.sh@10 -- # set +x 00:06:42.625 08:19:54 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:44.532 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:44.532 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:44.532 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:44.532 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:44.532 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:44.532 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:44.532 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:44.532 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:44.532 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:44.532 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:44.532 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:44.533 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:44.533 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:44.533 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:44.533 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:44.533 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:45.472 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:06:45.472 08:19:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:45.472 08:19:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.472 08:19:57 -- common/autotest_common.sh@10 -- # set +x 00:06:45.472 08:19:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:45.472 08:19:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:45.472 08:19:57 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:45.472 08:19:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:45.472 08:19:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:45.472 08:19:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:45.472 08:19:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:45.472 08:19:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:45.472 08:19:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:45.472 08:19:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:45.472 08:19:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:45.732 08:19:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:45.732 08:19:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:06:45.732 08:19:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:45.732 08:19:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:06:45.732 08:19:58 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:45.732 08:19:58 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:45.732 08:19:58 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:45.732 08:19:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:06:45.732 08:19:58 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:06:45.732 08:19:58 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2176529 00:06:45.732 08:19:58 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.732 08:19:58 -- common/autotest_common.sh@1598 -- # waitforlisten 2176529 00:06:45.732 08:19:58 -- common/autotest_common.sh@829 -- # '[' -z 2176529 ']' 00:06:45.732 08:19:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.732 08:19:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.732 08:19:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.732 08:19:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.732 08:19:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.732 [2024-07-23 08:19:58.209730] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:45.732 [2024-07-23 08:19:58.209911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176529 ] 00:06:45.992 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.992 [2024-07-23 08:19:58.416466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.561 [2024-07-23 08:19:58.915050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.098 08:20:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.098 08:20:01 -- common/autotest_common.sh@862 -- # return 0 00:06:49.098 08:20:01 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:49.098 08:20:01 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:49.098 08:20:01 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:06:52.390 nvme0n1 00:06:52.390 08:20:04 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:52.997 [2024-07-23 08:20:05.214403] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:52.997 [2024-07-23 08:20:05.214490] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:52.997 request: 00:06:52.997 { 00:06:52.997 "nvme_ctrlr_name": "nvme0", 00:06:52.997 "password": "test", 00:06:52.997 "method": "bdev_nvme_opal_revert", 00:06:52.997 "req_id": 1 00:06:52.997 } 00:06:52.997 Got JSON-RPC error response 00:06:52.997 response: 00:06:52.997 { 00:06:52.997 "code": -32603, 00:06:52.997 "message": "Internal error" 00:06:52.997 } 00:06:52.997 08:20:05 -- common/autotest_common.sh@1604 -- # true 00:06:52.997 08:20:05 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:52.997 08:20:05 -- common/autotest_common.sh@1608 -- # killprocess 2176529 00:06:52.997 08:20:05 -- common/autotest_common.sh@948 -- # '[' -z 2176529 ']' 00:06:52.997 08:20:05 -- common/autotest_common.sh@952 -- # kill -0 2176529 00:06:52.997 08:20:05 -- common/autotest_common.sh@953 -- # uname 00:06:52.997 08:20:05 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.997 08:20:05 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2176529 00:06:52.997 08:20:05 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.997 08:20:05 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.997 08:20:05 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2176529' 00:06:52.997 killing process with pid 2176529 00:06:52.997 08:20:05 -- common/autotest_common.sh@967 -- # kill 2176529 00:06:52.997 08:20:05 -- common/autotest_common.sh@972 -- # wait 2176529 00:06:59.577 08:20:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:59.577 08:20:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:59.577 08:20:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:59.577 08:20:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:59.577 08:20:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:59.577 08:20:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:59.577 08:20:11 -- common/autotest_common.sh@10 -- # set +x 00:06:59.577 08:20:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:59.577 08:20:11 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:59.577 08:20:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.577 08:20:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.577 08:20:11 -- common/autotest_common.sh@10 -- # set +x 00:06:59.577 ************************************ 00:06:59.577 START TEST env 00:06:59.577 ************************************ 00:06:59.577 08:20:11 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:59.578 * Looking for test storage... 00:06:59.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:59.578 08:20:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.578 08:20:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.578 08:20:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.578 08:20:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.578 ************************************ 00:06:59.578 START TEST env_memory 00:06:59.578 ************************************ 00:06:59.578 08:20:11 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.578 00:06:59.578 00:06:59.578 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.578 http://cunit.sourceforge.net/ 00:06:59.578 00:06:59.578 00:06:59.578 Suite: memory 00:06:59.578 Test: alloc and free memory map ...[2024-07-23 08:20:11.442559] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:59.578 passed 00:06:59.578 Test: mem map translation ...[2024-07-23 08:20:11.495134] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:59.578 [2024-07-23 08:20:11.495188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:59.578 [2024-07-23 08:20:11.495280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:59.578 [2024-07-23 08:20:11.495327] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:59.578 passed 00:06:59.578 Test: mem map registration ...[2024-07-23 08:20:11.578169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:59.578 [2024-07-23 08:20:11.578223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:59.578 passed 00:06:59.578 Test: mem map adjacent registrations ...passed 00:06:59.578 00:06:59.578 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.578 suites 1 1 n/a 0 0 00:06:59.578 tests 4 4 4 0 0 00:06:59.578 asserts 152 152 152 0 n/a 00:06:59.578 00:06:59.578 Elapsed time = 0.291 seconds 00:06:59.578 00:06:59.578 real 0m0.317s 00:06:59.578 user 0m0.297s 00:06:59.578 sys 0m0.019s 00:06:59.578 08:20:11 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.578 08:20:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:59.578 ************************************ 00:06:59.578 END TEST env_memory 00:06:59.578 ************************************ 00:06:59.578 08:20:11 env -- common/autotest_common.sh@1142 -- # return 0 00:06:59.578 08:20:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:59.578 08:20:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.578 08:20:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.578 08:20:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.578 ************************************ 00:06:59.578 START TEST env_vtophys 00:06:59.578 ************************************ 00:06:59.578 08:20:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:59.578 EAL: lib.eal log level changed from notice to debug 00:06:59.578 EAL: Detected lcore 0 as core 0 on socket 0 00:06:59.578 EAL: Detected lcore 1 as core 1 on socket 0 00:06:59.578 EAL: Detected lcore 2 as core 2 on socket 0 00:06:59.578 EAL: Detected lcore 3 as core 3 on socket 0 00:06:59.578 EAL: Detected lcore 4 as core 4 on socket 0 00:06:59.578 EAL: Detected lcore 5 as core 5 on socket 0 00:06:59.578 EAL: Detected lcore 6 as core 8 on socket 0 00:06:59.578 EAL: Detected lcore 7 as core 9 on socket 0 00:06:59.578 EAL: Detected lcore 8 as core 10 on socket 0 00:06:59.578 EAL: Detected lcore 9 as core 11 on socket 0 00:06:59.578 EAL: Detected lcore 10 as core 12 on socket 0 00:06:59.578 EAL: Detected lcore 11 as core 13 on socket 0 00:06:59.578 EAL: Detected lcore 12 as core 0 on socket 1 00:06:59.578 EAL: Detected lcore 13 as core 1 on socket 1 00:06:59.578 EAL: Detected lcore 14 as core 2 on socket 1 00:06:59.578 EAL: Detected lcore 15 as core 3 on socket 1 00:06:59.578 EAL: Detected lcore 16 as core 4 on socket 1 00:06:59.578 EAL: Detected lcore 17 as core 5 on socket 1 00:06:59.578 EAL: Detected lcore 18 as core 8 on socket 1 00:06:59.578 EAL: Detected lcore 19 as core 9 on socket 1 00:06:59.578 EAL: Detected lcore 20 as core 10 on socket 1 00:06:59.578 EAL: Detected lcore 21 as core 11 on socket 1 00:06:59.578 EAL: Detected lcore 22 as core 12 on socket 1 00:06:59.578 EAL: Detected lcore 23 as core 13 on socket 1 00:06:59.578 EAL: Detected lcore 24 as core 0 on socket 0 00:06:59.578 EAL: Detected lcore 25 as core 1 on socket 0 00:06:59.578 EAL: Detected lcore 26 as core 2 on socket 0 00:06:59.578 EAL: Detected lcore 27 as core 3 on socket 0 00:06:59.578 EAL: Detected lcore 28 as core 4 on socket 0 00:06:59.578 EAL: Detected lcore 29 as core 5 on socket 0 00:06:59.578 EAL: Detected lcore 30 as core 8 on socket 0 00:06:59.578 EAL: Detected lcore 31 as core 9 on socket 0 00:06:59.578 EAL: Detected lcore 32 as core 10 on socket 0 00:06:59.578 EAL: Detected lcore 33 as core 11 on socket 0 00:06:59.578 EAL: Detected lcore 34 as core 12 on socket 0 00:06:59.578 EAL: Detected lcore 35 as core 13 on socket 0 00:06:59.578 EAL: Detected lcore 36 as core 0 on socket 1 00:06:59.578 EAL: Detected lcore 37 as core 1 on socket 1 00:06:59.578 EAL: Detected lcore 38 as core 2 on socket 1 00:06:59.578 EAL: Detected lcore 39 as core 3 on socket 1 00:06:59.578 EAL: Detected lcore 40 as core 4 on socket 1 00:06:59.578 EAL: Detected lcore 41 as core 5 on socket 1 00:06:59.578 EAL: Detected 
lcore 42 as core 8 on socket 1 00:06:59.578 EAL: Detected lcore 43 as core 9 on socket 1 00:06:59.578 EAL: Detected lcore 44 as core 10 on socket 1 00:06:59.578 EAL: Detected lcore 45 as core 11 on socket 1 00:06:59.578 EAL: Detected lcore 46 as core 12 on socket 1 00:06:59.578 EAL: Detected lcore 47 as core 13 on socket 1 00:06:59.578 EAL: Maximum logical cores by configuration: 128 00:06:59.578 EAL: Detected CPU lcores: 48 00:06:59.578 EAL: Detected NUMA nodes: 2 00:06:59.578 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:59.578 EAL: Detected shared linkage of DPDK 00:06:59.578 EAL: No shared files mode enabled, IPC will be disabled 00:06:59.578 EAL: Bus pci wants IOVA as 'DC' 00:06:59.578 EAL: Buses did not request a specific IOVA mode. 00:06:59.578 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:59.578 EAL: Selected IOVA mode 'VA' 00:06:59.578 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.578 EAL: Probing VFIO support... 00:06:59.578 EAL: IOMMU type 1 (Type 1) is supported 00:06:59.578 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:59.578 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:59.578 EAL: VFIO support initialized 00:06:59.578 EAL: Ask a virtual area of 0x2e000 bytes 00:06:59.578 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:59.578 EAL: Setting up physically contiguous memory... 00:06:59.578 EAL: Setting maximum number of open files to 524288 00:06:59.578 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:59.578 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:59.578 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:59.578 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.578 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:59.578 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.578 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.578 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:59.578 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:59.578 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.578 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:59.578 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.578 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.578 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:59.578 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:59.578 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.578 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:59.578 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.578 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.578 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:59.578 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:59.578 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.578 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:59.578 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:59.578 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.578 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:59.578 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:59.578 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:59.578 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.578 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:59.578 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:06:59.578 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.578 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:59.578 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:59.578 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.578 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:59.579 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.579 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.579 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:59.579 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:59.579 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.579 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:59.579 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.579 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.579 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:59.579 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:59.579 EAL: Ask a virtual area of 0x61000 bytes 00:06:59.579 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:59.579 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:59.579 EAL: Ask a virtual area of 0x400000000 bytes 00:06:59.579 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:59.579 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:59.579 EAL: Hugepages will be freed exactly as allocated. 00:06:59.579 EAL: No shared files mode enabled, IPC is disabled 00:06:59.579 EAL: No shared files mode enabled, IPC is disabled 00:06:59.579 EAL: TSC frequency is ~2700000 KHz 00:06:59.579 EAL: Main lcore 0 is ready (tid=7f41eeac5a40;cpuset=[0]) 00:06:59.579 EAL: Trying to obtain current memory policy. 00:06:59.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.579 EAL: Restoring previous memory policy: 0 00:06:59.579 EAL: request: mp_malloc_sync 00:06:59.579 EAL: No shared files mode enabled, IPC is disabled 00:06:59.579 EAL: Heap on socket 0 was expanded by 2MB 00:06:59.579 EAL: No shared files mode enabled, IPC is disabled 00:06:59.838 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:59.838 EAL: Mem event callback 'spdk:(nil)' registered 00:06:59.838 00:06:59.838 00:06:59.838 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.838 http://cunit.sourceforge.net/ 00:06:59.838 00:06:59.838 00:06:59.838 Suite: components_suite 00:07:00.406 Test: vtophys_malloc_test ...passed 00:07:00.406 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:00.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.406 EAL: Restoring previous memory policy: 4 00:07:00.406 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.406 EAL: request: mp_malloc_sync 00:07:00.406 EAL: No shared files mode enabled, IPC is disabled 00:07:00.406 EAL: Heap on socket 0 was expanded by 4MB 00:07:00.406 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.406 EAL: request: mp_malloc_sync 00:07:00.406 EAL: No shared files mode enabled, IPC is disabled 00:07:00.406 EAL: Heap on socket 0 was shrunk by 4MB 00:07:00.406 EAL: Trying to obtain current memory policy. 
00:07:00.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.406 EAL: Restoring previous memory policy: 4 00:07:00.406 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.406 EAL: request: mp_malloc_sync 00:07:00.406 EAL: No shared files mode enabled, IPC is disabled 00:07:00.406 EAL: Heap on socket 0 was expanded by 6MB 00:07:00.406 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.406 EAL: request: mp_malloc_sync 00:07:00.406 EAL: No shared files mode enabled, IPC is disabled 00:07:00.406 EAL: Heap on socket 0 was shrunk by 6MB 00:07:00.406 EAL: Trying to obtain current memory policy. 00:07:00.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.406 EAL: Restoring previous memory policy: 4 00:07:00.406 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.406 EAL: request: mp_malloc_sync 00:07:00.406 EAL: No shared files mode enabled, IPC is disabled 00:07:00.406 EAL: Heap on socket 0 was expanded by 10MB 00:07:00.679 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.679 EAL: request: mp_malloc_sync 00:07:00.679 EAL: No shared files mode enabled, IPC is disabled 00:07:00.679 EAL: Heap on socket 0 was shrunk by 10MB 00:07:00.679 EAL: Trying to obtain current memory policy. 00:07:00.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.679 EAL: Restoring previous memory policy: 4 00:07:00.679 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.679 EAL: request: mp_malloc_sync 00:07:00.679 EAL: No shared files mode enabled, IPC is disabled 00:07:00.679 EAL: Heap on socket 0 was expanded by 18MB 00:07:00.679 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.679 EAL: request: mp_malloc_sync 00:07:00.679 EAL: No shared files mode enabled, IPC is disabled 00:07:00.679 EAL: Heap on socket 0 was shrunk by 18MB 00:07:00.679 EAL: Trying to obtain current memory policy. 00:07:00.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.679 EAL: Restoring previous memory policy: 4 00:07:00.679 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.679 EAL: request: mp_malloc_sync 00:07:00.679 EAL: No shared files mode enabled, IPC is disabled 00:07:00.679 EAL: Heap on socket 0 was expanded by 34MB 00:07:00.938 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.938 EAL: request: mp_malloc_sync 00:07:00.938 EAL: No shared files mode enabled, IPC is disabled 00:07:00.938 EAL: Heap on socket 0 was shrunk by 34MB 00:07:00.938 EAL: Trying to obtain current memory policy. 00:07:00.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.938 EAL: Restoring previous memory policy: 4 00:07:00.938 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.938 EAL: request: mp_malloc_sync 00:07:00.938 EAL: No shared files mode enabled, IPC is disabled 00:07:00.938 EAL: Heap on socket 0 was expanded by 66MB 00:07:01.198 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.198 EAL: request: mp_malloc_sync 00:07:01.198 EAL: No shared files mode enabled, IPC is disabled 00:07:01.198 EAL: Heap on socket 0 was shrunk by 66MB 00:07:01.457 EAL: Trying to obtain current memory policy. 
00:07:01.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.457 EAL: Restoring previous memory policy: 4 00:07:01.457 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.457 EAL: request: mp_malloc_sync 00:07:01.457 EAL: No shared files mode enabled, IPC is disabled 00:07:01.457 EAL: Heap on socket 0 was expanded by 130MB 00:07:02.025 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.025 EAL: request: mp_malloc_sync 00:07:02.025 EAL: No shared files mode enabled, IPC is disabled 00:07:02.025 EAL: Heap on socket 0 was shrunk by 130MB 00:07:02.285 EAL: Trying to obtain current memory policy. 00:07:02.285 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.544 EAL: Restoring previous memory policy: 4 00:07:02.544 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.544 EAL: request: mp_malloc_sync 00:07:02.544 EAL: No shared files mode enabled, IPC is disabled 00:07:02.544 EAL: Heap on socket 0 was expanded by 258MB 00:07:03.481 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.481 EAL: request: mp_malloc_sync 00:07:03.481 EAL: No shared files mode enabled, IPC is disabled 00:07:03.481 EAL: Heap on socket 0 was shrunk by 258MB 00:07:04.418 EAL: Trying to obtain current memory policy. 00:07:04.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.677 EAL: Restoring previous memory policy: 4 00:07:04.677 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.677 EAL: request: mp_malloc_sync 00:07:04.677 EAL: No shared files mode enabled, IPC is disabled 00:07:04.677 EAL: Heap on socket 0 was expanded by 514MB 00:07:06.584 EAL: Calling mem event callback 'spdk:(nil)' 00:07:06.584 EAL: request: mp_malloc_sync 00:07:06.584 EAL: No shared files mode enabled, IPC is disabled 00:07:06.584 EAL: Heap on socket 0 was shrunk by 514MB 00:07:08.484 EAL: Trying to obtain current memory policy. 
00:07:08.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.742 EAL: Restoring previous memory policy: 4 00:07:08.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.742 EAL: request: mp_malloc_sync 00:07:08.742 EAL: No shared files mode enabled, IPC is disabled 00:07:08.742 EAL: Heap on socket 0 was expanded by 1026MB 00:07:12.929 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.929 EAL: request: mp_malloc_sync 00:07:12.929 EAL: No shared files mode enabled, IPC is disabled 00:07:12.929 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:16.223 passed 00:07:16.223 00:07:16.223 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.223 suites 1 1 n/a 0 0 00:07:16.223 tests 2 2 2 0 0 00:07:16.223 asserts 497 497 497 0 n/a 00:07:16.223 00:07:16.223 Elapsed time = 15.910 seconds 00:07:16.223 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.223 EAL: request: mp_malloc_sync 00:07:16.223 EAL: No shared files mode enabled, IPC is disabled 00:07:16.223 EAL: Heap on socket 0 was shrunk by 2MB 00:07:16.223 EAL: No shared files mode enabled, IPC is disabled 00:07:16.223 EAL: No shared files mode enabled, IPC is disabled 00:07:16.223 EAL: No shared files mode enabled, IPC is disabled 00:07:16.223 00:07:16.223 real 0m16.511s 00:07:16.223 user 0m14.502s 00:07:16.223 sys 0m1.862s 00:07:16.223 08:20:28 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.223 08:20:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:16.223 ************************************ 00:07:16.223 END TEST env_vtophys 00:07:16.223 ************************************ 00:07:16.223 08:20:28 env -- common/autotest_common.sh@1142 -- # return 0 00:07:16.223 08:20:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:16.223 08:20:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.223 08:20:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.223 08:20:28 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.223 ************************************ 00:07:16.223 START TEST env_pci 00:07:16.223 ************************************ 00:07:16.223 08:20:28 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:16.223 00:07:16.223 00:07:16.223 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.223 http://cunit.sourceforge.net/ 00:07:16.223 00:07:16.223 00:07:16.223 Suite: pci 00:07:16.223 Test: pci_hook ...[2024-07-23 08:20:28.426916] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2179940 has claimed it 00:07:16.223 EAL: Cannot find device (10000:00:01.0) 00:07:16.223 EAL: Failed to attach device on primary process 00:07:16.223 passed 00:07:16.223 00:07:16.223 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.223 suites 1 1 n/a 0 0 00:07:16.223 tests 1 1 1 0 0 00:07:16.223 asserts 25 25 25 0 n/a 00:07:16.223 00:07:16.223 Elapsed time = 0.110 seconds 00:07:16.223 00:07:16.223 real 0m0.220s 00:07:16.223 user 0m0.084s 00:07:16.223 sys 0m0.134s 00:07:16.223 08:20:28 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.223 08:20:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:16.223 ************************************ 00:07:16.223 END TEST env_pci 00:07:16.223 ************************************ 
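For reference, the env suite binaries exercised above can also be invoked outside the run_test wrappers. The lines below are an illustrative sketch only, not part of the captured output: the paths and flags are copied from the commands logged in this run, and it assumes the same workspace layout, root privileges, and that hugepages/VFIO have already been prepared by scripts/setup.sh (whose driver rebinds are visible earlier in this log).

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/setup.sh                      # reserve hugepages, bind NVMe/IOAT devices to vfio-pci
    $spdk/test/env/memory/memory_ut             # mem map alloc/translation/registration checks
    $spdk/test/env/vtophys/vtophys              # virtual-to-physical translation under repeated malloc
    $spdk/test/env/pci/pci_ut                   # PCI device claim/hook checks

The remaining env binaries in this run (env_dpdk_post_init with '-c 0x1 --base-virtaddr=0x200000000000', mem_callbacks) follow the same pattern.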
00:07:16.223 08:20:28 env -- common/autotest_common.sh@1142 -- # return 0 00:07:16.223 08:20:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:16.223 08:20:28 env -- env/env.sh@15 -- # uname 00:07:16.223 08:20:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:16.223 08:20:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:16.223 08:20:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:16.223 08:20:28 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:16.223 08:20:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.223 08:20:28 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.223 ************************************ 00:07:16.223 START TEST env_dpdk_post_init 00:07:16.223 ************************************ 00:07:16.223 08:20:28 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:16.509 EAL: Detected CPU lcores: 48 00:07:16.509 EAL: Detected NUMA nodes: 2 00:07:16.509 EAL: Detected shared linkage of DPDK 00:07:16.509 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:16.509 EAL: Selected IOVA mode 'VA' 00:07:16.509 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.509 EAL: VFIO support initialized 00:07:16.509 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:16.773 EAL: Using IOMMU type 1 (Type 1) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:07:16.773 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:07:17.031 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:07:17.031 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:07:17.031 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:07:17.031 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:07:17.031 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:07:17.032 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:07:17.600 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:07:20.889 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:07:20.889 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:07:21.148 Starting DPDK initialization... 00:07:21.148 Starting SPDK post initialization... 00:07:21.148 SPDK NVMe probe 00:07:21.148 Attaching to 0000:82:00.0 00:07:21.148 Attached to 0000:82:00.0 00:07:21.148 Cleaning up... 
00:07:21.148 00:07:21.148 real 0m4.930s 00:07:21.148 user 0m3.551s 00:07:21.148 sys 0m0.411s 00:07:21.148 08:20:33 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.148 08:20:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:21.148 ************************************ 00:07:21.148 END TEST env_dpdk_post_init 00:07:21.148 ************************************ 00:07:21.148 08:20:33 env -- common/autotest_common.sh@1142 -- # return 0 00:07:21.148 08:20:33 env -- env/env.sh@26 -- # uname 00:07:21.148 08:20:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:21.148 08:20:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:21.148 08:20:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.148 08:20:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.148 08:20:33 env -- common/autotest_common.sh@10 -- # set +x 00:07:21.408 ************************************ 00:07:21.408 START TEST env_mem_callbacks 00:07:21.408 ************************************ 00:07:21.408 08:20:33 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:21.408 EAL: Detected CPU lcores: 48 00:07:21.408 EAL: Detected NUMA nodes: 2 00:07:21.408 EAL: Detected shared linkage of DPDK 00:07:21.408 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:21.408 EAL: Selected IOVA mode 'VA' 00:07:21.408 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.408 EAL: VFIO support initialized 00:07:21.669 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:21.669 00:07:21.669 00:07:21.669 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.669 http://cunit.sourceforge.net/ 00:07:21.669 00:07:21.669 00:07:21.669 Suite: memory 00:07:21.669 Test: test ... 
00:07:21.669 register 0x200000200000 2097152 00:07:21.669 malloc 3145728 00:07:21.669 register 0x200000400000 4194304 00:07:21.669 buf 0x2000004fffc0 len 3145728 PASSED 00:07:21.669 malloc 64 00:07:21.669 buf 0x2000004ffec0 len 64 PASSED 00:07:21.669 malloc 4194304 00:07:21.669 register 0x200000800000 6291456 00:07:21.669 buf 0x2000009fffc0 len 4194304 PASSED 00:07:21.669 free 0x2000004fffc0 3145728 00:07:21.669 free 0x2000004ffec0 64 00:07:21.669 unregister 0x200000400000 4194304 PASSED 00:07:21.669 free 0x2000009fffc0 4194304 00:07:21.669 unregister 0x200000800000 6291456 PASSED 00:07:21.669 malloc 8388608 00:07:21.669 register 0x200000400000 10485760 00:07:21.669 buf 0x2000005fffc0 len 8388608 PASSED 00:07:21.669 free 0x2000005fffc0 8388608 00:07:21.669 unregister 0x200000400000 10485760 PASSED 00:07:21.669 passed 00:07:21.669 00:07:21.669 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.669 suites 1 1 n/a 0 0 00:07:21.669 tests 1 1 1 0 0 00:07:21.669 asserts 15 15 15 0 n/a 00:07:21.669 00:07:21.669 Elapsed time = 0.119 seconds 00:07:21.669 00:07:21.669 real 0m0.400s 00:07:21.669 user 0m0.215s 00:07:21.669 sys 0m0.180s 00:07:21.669 08:20:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.669 08:20:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:21.669 ************************************ 00:07:21.669 END TEST env_mem_callbacks 00:07:21.669 ************************************ 00:07:21.669 08:20:34 env -- common/autotest_common.sh@1142 -- # return 0 00:07:21.669 00:07:21.669 real 0m22.858s 00:07:21.669 user 0m18.858s 00:07:21.669 sys 0m2.905s 00:07:21.669 08:20:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.669 08:20:34 env -- common/autotest_common.sh@10 -- # set +x 00:07:21.669 ************************************ 00:07:21.669 END TEST env 00:07:21.669 ************************************ 00:07:21.669 08:20:34 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.669 08:20:34 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:21.669 08:20:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.669 08:20:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.670 08:20:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.930 ************************************ 00:07:21.930 START TEST rpc 00:07:21.930 ************************************ 00:07:21.930 08:20:34 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:21.930 * Looking for test storage... 00:07:21.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:21.930 08:20:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2180730 00:07:21.930 08:20:34 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:21.930 08:20:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:21.930 08:20:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2180730 00:07:21.930 08:20:34 rpc -- common/autotest_common.sh@829 -- # '[' -z 2180730 ']' 00:07:21.930 08:20:34 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.930 08:20:34 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.930 08:20:34 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:21.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.930 08:20:34 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.930 08:20:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.189 [2024-07-23 08:20:34.471657] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:22.189 [2024-07-23 08:20:34.471895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180730 ] 00:07:22.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.448 [2024-07-23 08:20:34.770535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.017 [2024-07-23 08:20:35.247522] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:23.017 [2024-07-23 08:20:35.247662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2180730' to capture a snapshot of events at runtime. 00:07:23.017 [2024-07-23 08:20:35.247724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.017 [2024-07-23 08:20:35.247793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.017 [2024-07-23 08:20:35.247834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2180730 for offline analysis/debug. 00:07:23.017 [2024-07-23 08:20:35.247956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.396 08:20:36 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.396 08:20:36 rpc -- common/autotest_common.sh@862 -- # return 0 00:07:24.396 08:20:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:24.396 08:20:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:24.396 08:20:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:24.396 08:20:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:24.396 08:20:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.396 08:20:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.396 08:20:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.655 ************************************ 00:07:24.655 START TEST rpc_integrity 00:07:24.655 ************************************ 00:07:24.655 08:20:36 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:24.655 08:20:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:24.655 08:20:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.655 08:20:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.655 08:20:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.655 08:20:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:07:24.655 08:20:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:24.655 { 00:07:24.655 "name": "Malloc0", 00:07:24.655 "aliases": [ 00:07:24.655 "3a67757a-f6b8-4820-8ddd-56c70184347a" 00:07:24.655 ], 00:07:24.655 "product_name": "Malloc disk", 00:07:24.655 "block_size": 512, 00:07:24.655 "num_blocks": 16384, 00:07:24.655 "uuid": "3a67757a-f6b8-4820-8ddd-56c70184347a", 00:07:24.655 "assigned_rate_limits": { 00:07:24.655 "rw_ios_per_sec": 0, 00:07:24.655 "rw_mbytes_per_sec": 0, 00:07:24.655 "r_mbytes_per_sec": 0, 00:07:24.655 "w_mbytes_per_sec": 0 00:07:24.655 }, 00:07:24.655 "claimed": false, 00:07:24.655 "zoned": false, 00:07:24.655 "supported_io_types": { 00:07:24.655 "read": true, 00:07:24.655 "write": true, 00:07:24.655 "unmap": true, 00:07:24.655 "flush": true, 00:07:24.655 "reset": true, 00:07:24.655 "nvme_admin": false, 00:07:24.655 "nvme_io": false, 00:07:24.655 "nvme_io_md": false, 00:07:24.655 "write_zeroes": true, 00:07:24.655 "zcopy": true, 00:07:24.655 "get_zone_info": false, 00:07:24.655 "zone_management": false, 00:07:24.655 "zone_append": false, 00:07:24.655 "compare": false, 00:07:24.655 "compare_and_write": false, 00:07:24.655 "abort": true, 00:07:24.655 "seek_hole": false, 00:07:24.655 "seek_data": false, 00:07:24.655 "copy": true, 00:07:24.655 "nvme_iov_md": false 00:07:24.655 }, 00:07:24.655 "memory_domains": [ 00:07:24.655 { 00:07:24.655 "dma_device_id": "system", 00:07:24.655 "dma_device_type": 1 00:07:24.655 }, 00:07:24.655 { 00:07:24.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.655 "dma_device_type": 2 00:07:24.655 } 00:07:24.655 ], 00:07:24.655 "driver_specific": {} 00:07:24.655 } 00:07:24.655 ]' 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:24.655 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.655 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.915 [2024-07-23 08:20:37.178020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:24.915 [2024-07-23 08:20:37.178196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.915 [2024-07-23 08:20:37.178291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023480 00:07:24.915 [2024-07-23 08:20:37.178383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:24.915 [2024-07-23 08:20:37.183128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.915 [2024-07-23 08:20:37.183224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:24.915 Passthru0 00:07:24.915 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.915 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:24.915 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.915 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.915 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.915 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:24.915 { 00:07:24.915 "name": "Malloc0", 00:07:24.915 "aliases": [ 00:07:24.915 "3a67757a-f6b8-4820-8ddd-56c70184347a" 00:07:24.915 ], 00:07:24.915 "product_name": "Malloc disk", 00:07:24.915 "block_size": 512, 00:07:24.915 "num_blocks": 16384, 00:07:24.915 "uuid": "3a67757a-f6b8-4820-8ddd-56c70184347a", 00:07:24.915 "assigned_rate_limits": { 00:07:24.915 "rw_ios_per_sec": 0, 00:07:24.915 "rw_mbytes_per_sec": 0, 00:07:24.915 "r_mbytes_per_sec": 0, 00:07:24.915 "w_mbytes_per_sec": 0 00:07:24.915 }, 00:07:24.915 "claimed": true, 00:07:24.915 "claim_type": "exclusive_write", 00:07:24.915 "zoned": false, 00:07:24.915 "supported_io_types": { 00:07:24.915 "read": true, 00:07:24.915 "write": true, 00:07:24.915 "unmap": true, 00:07:24.915 "flush": true, 00:07:24.915 "reset": true, 00:07:24.915 "nvme_admin": false, 00:07:24.915 "nvme_io": false, 00:07:24.915 "nvme_io_md": false, 00:07:24.915 "write_zeroes": true, 00:07:24.915 "zcopy": true, 00:07:24.915 "get_zone_info": false, 00:07:24.915 "zone_management": false, 00:07:24.915 "zone_append": false, 00:07:24.915 "compare": false, 00:07:24.915 "compare_and_write": false, 00:07:24.915 "abort": true, 00:07:24.915 "seek_hole": false, 00:07:24.915 "seek_data": false, 00:07:24.915 "copy": true, 00:07:24.915 "nvme_iov_md": false 00:07:24.915 }, 00:07:24.915 "memory_domains": [ 00:07:24.915 { 00:07:24.915 "dma_device_id": "system", 00:07:24.915 "dma_device_type": 1 00:07:24.915 }, 00:07:24.915 { 00:07:24.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.915 "dma_device_type": 2 00:07:24.915 } 00:07:24.915 ], 00:07:24.915 "driver_specific": {} 00:07:24.915 }, 00:07:24.915 { 00:07:24.915 "name": "Passthru0", 00:07:24.915 "aliases": [ 00:07:24.915 "277a8b05-b49e-5182-9454-dac1195596e4" 00:07:24.915 ], 00:07:24.915 "product_name": "passthru", 00:07:24.915 "block_size": 512, 00:07:24.915 "num_blocks": 16384, 00:07:24.915 "uuid": "277a8b05-b49e-5182-9454-dac1195596e4", 00:07:24.915 "assigned_rate_limits": { 00:07:24.915 "rw_ios_per_sec": 0, 00:07:24.915 "rw_mbytes_per_sec": 0, 00:07:24.915 "r_mbytes_per_sec": 0, 00:07:24.915 "w_mbytes_per_sec": 0 00:07:24.915 }, 00:07:24.915 "claimed": false, 00:07:24.915 "zoned": false, 00:07:24.915 "supported_io_types": { 00:07:24.915 "read": true, 00:07:24.915 "write": true, 00:07:24.915 "unmap": true, 00:07:24.915 "flush": true, 00:07:24.915 "reset": true, 00:07:24.916 "nvme_admin": false, 00:07:24.916 "nvme_io": false, 00:07:24.916 "nvme_io_md": false, 00:07:24.916 "write_zeroes": true, 00:07:24.916 "zcopy": true, 00:07:24.916 "get_zone_info": false, 00:07:24.916 "zone_management": false, 00:07:24.916 "zone_append": false, 00:07:24.916 "compare": false, 00:07:24.916 "compare_and_write": false, 00:07:24.916 "abort": true, 00:07:24.916 
"seek_hole": false, 00:07:24.916 "seek_data": false, 00:07:24.916 "copy": true, 00:07:24.916 "nvme_iov_md": false 00:07:24.916 }, 00:07:24.916 "memory_domains": [ 00:07:24.916 { 00:07:24.916 "dma_device_id": "system", 00:07:24.916 "dma_device_type": 1 00:07:24.916 }, 00:07:24.916 { 00:07:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.916 "dma_device_type": 2 00:07:24.916 } 00:07:24.916 ], 00:07:24.916 "driver_specific": { 00:07:24.916 "passthru": { 00:07:24.916 "name": "Passthru0", 00:07:24.916 "base_bdev_name": "Malloc0" 00:07:24.916 } 00:07:24.916 } 00:07:24.916 } 00:07:24.916 ]' 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:24.916 08:20:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:24.916 00:07:24.916 real 0m0.449s 00:07:24.916 user 0m0.275s 00:07:24.916 sys 0m0.037s 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.916 08:20:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.916 ************************************ 00:07:24.916 END TEST rpc_integrity 00:07:24.916 ************************************ 00:07:25.175 08:20:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:25.175 08:20:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:25.175 08:20:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.175 08:20:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.175 08:20:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.175 ************************************ 00:07:25.175 START TEST rpc_plugins 00:07:25.175 ************************************ 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:25.176 08:20:37 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:25.176 { 00:07:25.176 "name": "Malloc1", 00:07:25.176 "aliases": [ 00:07:25.176 "8aeb6796-6b0a-405f-bcb6-4e48882384e4" 00:07:25.176 ], 00:07:25.176 "product_name": "Malloc disk", 00:07:25.176 "block_size": 4096, 00:07:25.176 "num_blocks": 256, 00:07:25.176 "uuid": "8aeb6796-6b0a-405f-bcb6-4e48882384e4", 00:07:25.176 "assigned_rate_limits": { 00:07:25.176 "rw_ios_per_sec": 0, 00:07:25.176 "rw_mbytes_per_sec": 0, 00:07:25.176 "r_mbytes_per_sec": 0, 00:07:25.176 "w_mbytes_per_sec": 0 00:07:25.176 }, 00:07:25.176 "claimed": false, 00:07:25.176 "zoned": false, 00:07:25.176 "supported_io_types": { 00:07:25.176 "read": true, 00:07:25.176 "write": true, 00:07:25.176 "unmap": true, 00:07:25.176 "flush": true, 00:07:25.176 "reset": true, 00:07:25.176 "nvme_admin": false, 00:07:25.176 "nvme_io": false, 00:07:25.176 "nvme_io_md": false, 00:07:25.176 "write_zeroes": true, 00:07:25.176 "zcopy": true, 00:07:25.176 "get_zone_info": false, 00:07:25.176 "zone_management": false, 00:07:25.176 "zone_append": false, 00:07:25.176 "compare": false, 00:07:25.176 "compare_and_write": false, 00:07:25.176 "abort": true, 00:07:25.176 "seek_hole": false, 00:07:25.176 "seek_data": false, 00:07:25.176 "copy": true, 00:07:25.176 "nvme_iov_md": false 00:07:25.176 }, 00:07:25.176 "memory_domains": [ 00:07:25.176 { 00:07:25.176 "dma_device_id": "system", 00:07:25.176 "dma_device_type": 1 00:07:25.176 }, 00:07:25.176 { 00:07:25.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.176 "dma_device_type": 2 00:07:25.176 } 00:07:25.176 ], 00:07:25.176 "driver_specific": {} 00:07:25.176 } 00:07:25.176 ]' 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:25.176 08:20:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:25.176 00:07:25.176 real 0m0.206s 00:07:25.176 user 0m0.140s 00:07:25.176 sys 0m0.013s 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.176 08:20:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 ************************************ 00:07:25.176 END TEST rpc_plugins 00:07:25.176 ************************************ 00:07:25.435 08:20:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:25.435 08:20:37 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:25.435 08:20:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.435 08:20:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.435 08:20:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.435 ************************************ 00:07:25.435 START TEST rpc_trace_cmd_test 00:07:25.435 ************************************ 00:07:25.435 08:20:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:07:25.435 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:25.435 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:25.435 08:20:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.435 08:20:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.435 08:20:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.435 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:25.435 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2180730", 00:07:25.435 "tpoint_group_mask": "0x8", 00:07:25.435 "iscsi_conn": { 00:07:25.435 "mask": "0x2", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "scsi": { 00:07:25.435 "mask": "0x4", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "bdev": { 00:07:25.435 "mask": "0x8", 00:07:25.435 "tpoint_mask": "0xffffffffffffffff" 00:07:25.435 }, 00:07:25.435 "nvmf_rdma": { 00:07:25.435 "mask": "0x10", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "nvmf_tcp": { 00:07:25.435 "mask": "0x20", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "ftl": { 00:07:25.435 "mask": "0x40", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "blobfs": { 00:07:25.435 "mask": "0x80", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "dsa": { 00:07:25.435 "mask": "0x200", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "thread": { 00:07:25.435 "mask": "0x400", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "nvme_pcie": { 00:07:25.435 "mask": "0x800", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "iaa": { 00:07:25.435 "mask": "0x1000", 00:07:25.435 "tpoint_mask": "0x0" 00:07:25.435 }, 00:07:25.435 "nvme_tcp": { 00:07:25.436 "mask": "0x2000", 00:07:25.436 "tpoint_mask": "0x0" 00:07:25.436 }, 00:07:25.436 "bdev_nvme": { 00:07:25.436 "mask": "0x4000", 00:07:25.436 "tpoint_mask": "0x0" 00:07:25.436 }, 00:07:25.436 "sock": { 00:07:25.436 "mask": "0x8000", 00:07:25.436 "tpoint_mask": "0x0" 00:07:25.436 } 00:07:25.436 }' 00:07:25.436 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:25.436 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:25.436 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:25.436 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:25.436 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:25.694 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:25.694 08:20:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:25.694 08:20:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:25.694 08:20:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:25.694 08:20:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:07:25.694 00:07:25.694 real 0m0.375s 00:07:25.694 user 0m0.339s 00:07:25.694 sys 0m0.028s 00:07:25.694 08:20:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.694 08:20:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.694 ************************************ 00:07:25.694 END TEST rpc_trace_cmd_test 00:07:25.694 ************************************ 00:07:25.694 08:20:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:25.694 08:20:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:25.694 08:20:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:25.694 08:20:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:25.694 08:20:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.694 08:20:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.694 08:20:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.694 ************************************ 00:07:25.694 START TEST rpc_daemon_integrity 00:07:25.694 ************************************ 00:07:25.694 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:25.694 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:25.694 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.694 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.694 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:25.954 { 00:07:25.954 "name": "Malloc2", 00:07:25.954 "aliases": [ 00:07:25.954 "51da7148-a8c6-4177-b41f-4ccd9857b5ec" 00:07:25.954 ], 00:07:25.954 "product_name": "Malloc disk", 00:07:25.954 "block_size": 512, 00:07:25.954 "num_blocks": 16384, 00:07:25.954 "uuid": "51da7148-a8c6-4177-b41f-4ccd9857b5ec", 00:07:25.954 "assigned_rate_limits": { 00:07:25.954 "rw_ios_per_sec": 0, 00:07:25.954 "rw_mbytes_per_sec": 0, 00:07:25.954 "r_mbytes_per_sec": 0, 00:07:25.954 "w_mbytes_per_sec": 0 00:07:25.954 }, 00:07:25.954 "claimed": false, 00:07:25.954 "zoned": false, 00:07:25.954 "supported_io_types": { 00:07:25.954 "read": true, 00:07:25.954 "write": true, 00:07:25.954 "unmap": true, 00:07:25.954 "flush": true, 00:07:25.954 "reset": true, 00:07:25.954 "nvme_admin": false, 
00:07:25.954 "nvme_io": false, 00:07:25.954 "nvme_io_md": false, 00:07:25.954 "write_zeroes": true, 00:07:25.954 "zcopy": true, 00:07:25.954 "get_zone_info": false, 00:07:25.954 "zone_management": false, 00:07:25.954 "zone_append": false, 00:07:25.954 "compare": false, 00:07:25.954 "compare_and_write": false, 00:07:25.954 "abort": true, 00:07:25.954 "seek_hole": false, 00:07:25.954 "seek_data": false, 00:07:25.954 "copy": true, 00:07:25.954 "nvme_iov_md": false 00:07:25.954 }, 00:07:25.954 "memory_domains": [ 00:07:25.954 { 00:07:25.954 "dma_device_id": "system", 00:07:25.954 "dma_device_type": 1 00:07:25.954 }, 00:07:25.954 { 00:07:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.954 "dma_device_type": 2 00:07:25.954 } 00:07:25.954 ], 00:07:25.954 "driver_specific": {} 00:07:25.954 } 00:07:25.954 ]' 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 [2024-07-23 08:20:38.425434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:25.954 [2024-07-23 08:20:38.425527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.954 [2024-07-23 08:20:38.425574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000024680 00:07:25.954 [2024-07-23 08:20:38.425607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.954 [2024-07-23 08:20:38.430372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.954 [2024-07-23 08:20:38.430425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:25.954 Passthru0 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.954 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:25.954 { 00:07:25.954 "name": "Malloc2", 00:07:25.954 "aliases": [ 00:07:25.954 "51da7148-a8c6-4177-b41f-4ccd9857b5ec" 00:07:25.954 ], 00:07:25.954 "product_name": "Malloc disk", 00:07:25.954 "block_size": 512, 00:07:25.954 "num_blocks": 16384, 00:07:25.954 "uuid": "51da7148-a8c6-4177-b41f-4ccd9857b5ec", 00:07:25.954 "assigned_rate_limits": { 00:07:25.954 "rw_ios_per_sec": 0, 00:07:25.954 "rw_mbytes_per_sec": 0, 00:07:25.954 "r_mbytes_per_sec": 0, 00:07:25.954 "w_mbytes_per_sec": 0 00:07:25.954 }, 00:07:25.954 "claimed": true, 00:07:25.954 "claim_type": "exclusive_write", 00:07:25.954 "zoned": false, 00:07:25.954 "supported_io_types": { 00:07:25.954 "read": true, 00:07:25.954 "write": true, 00:07:25.954 "unmap": true, 00:07:25.954 "flush": true, 00:07:25.954 "reset": true, 00:07:25.954 "nvme_admin": false, 00:07:25.954 "nvme_io": false, 00:07:25.954 "nvme_io_md": false, 00:07:25.954 "write_zeroes": true, 00:07:25.954 "zcopy": 
true, 00:07:25.954 "get_zone_info": false, 00:07:25.954 "zone_management": false, 00:07:25.954 "zone_append": false, 00:07:25.954 "compare": false, 00:07:25.954 "compare_and_write": false, 00:07:25.954 "abort": true, 00:07:25.954 "seek_hole": false, 00:07:25.954 "seek_data": false, 00:07:25.954 "copy": true, 00:07:25.954 "nvme_iov_md": false 00:07:25.954 }, 00:07:25.954 "memory_domains": [ 00:07:25.954 { 00:07:25.954 "dma_device_id": "system", 00:07:25.954 "dma_device_type": 1 00:07:25.954 }, 00:07:25.954 { 00:07:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.955 "dma_device_type": 2 00:07:25.955 } 00:07:25.955 ], 00:07:25.955 "driver_specific": {} 00:07:25.955 }, 00:07:25.955 { 00:07:25.955 "name": "Passthru0", 00:07:25.955 "aliases": [ 00:07:25.955 "f361a308-c5e2-5694-8992-1dc14255c981" 00:07:25.955 ], 00:07:25.955 "product_name": "passthru", 00:07:25.955 "block_size": 512, 00:07:25.955 "num_blocks": 16384, 00:07:25.955 "uuid": "f361a308-c5e2-5694-8992-1dc14255c981", 00:07:25.955 "assigned_rate_limits": { 00:07:25.955 "rw_ios_per_sec": 0, 00:07:25.955 "rw_mbytes_per_sec": 0, 00:07:25.955 "r_mbytes_per_sec": 0, 00:07:25.955 "w_mbytes_per_sec": 0 00:07:25.955 }, 00:07:25.955 "claimed": false, 00:07:25.955 "zoned": false, 00:07:25.955 "supported_io_types": { 00:07:25.955 "read": true, 00:07:25.955 "write": true, 00:07:25.955 "unmap": true, 00:07:25.955 "flush": true, 00:07:25.955 "reset": true, 00:07:25.955 "nvme_admin": false, 00:07:25.955 "nvme_io": false, 00:07:25.955 "nvme_io_md": false, 00:07:25.955 "write_zeroes": true, 00:07:25.955 "zcopy": true, 00:07:25.955 "get_zone_info": false, 00:07:25.955 "zone_management": false, 00:07:25.955 "zone_append": false, 00:07:25.955 "compare": false, 00:07:25.955 "compare_and_write": false, 00:07:25.955 "abort": true, 00:07:25.955 "seek_hole": false, 00:07:25.955 "seek_data": false, 00:07:25.955 "copy": true, 00:07:25.955 "nvme_iov_md": false 00:07:25.955 }, 00:07:25.955 "memory_domains": [ 00:07:25.955 { 00:07:25.955 "dma_device_id": "system", 00:07:25.955 "dma_device_type": 1 00:07:25.955 }, 00:07:25.955 { 00:07:25.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.955 "dma_device_type": 2 00:07:25.955 } 00:07:25.955 ], 00:07:25.955 "driver_specific": { 00:07:25.955 "passthru": { 00:07:25.955 "name": "Passthru0", 00:07:25.955 "base_bdev_name": "Malloc2" 00:07:25.955 } 00:07:25.955 } 00:07:25.955 } 00:07:25.955 ]' 00:07:25.955 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:26.215 00:07:26.215 real 0m0.491s 00:07:26.215 user 0m0.320s 00:07:26.215 sys 0m0.041s 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.215 08:20:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:26.215 ************************************ 00:07:26.215 END TEST rpc_daemon_integrity 00:07:26.215 ************************************ 00:07:26.215 08:20:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:26.215 08:20:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:26.215 08:20:38 rpc -- rpc/rpc.sh@84 -- # killprocess 2180730 00:07:26.215 08:20:38 rpc -- common/autotest_common.sh@948 -- # '[' -z 2180730 ']' 00:07:26.215 08:20:38 rpc -- common/autotest_common.sh@952 -- # kill -0 2180730 00:07:26.215 08:20:38 rpc -- common/autotest_common.sh@953 -- # uname 00:07:26.475 08:20:38 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.475 08:20:38 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2180730 00:07:26.475 08:20:38 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.475 08:20:38 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.475 08:20:38 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2180730' 00:07:26.475 killing process with pid 2180730 00:07:26.475 08:20:38 rpc -- common/autotest_common.sh@967 -- # kill 2180730 00:07:26.475 08:20:38 rpc -- common/autotest_common.sh@972 -- # wait 2180730 00:07:31.766 00:07:31.766 real 0m9.404s 00:07:31.766 user 0m10.661s 00:07:31.766 sys 0m1.417s 00:07:31.766 08:20:43 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.766 08:20:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.766 ************************************ 00:07:31.766 END TEST rpc 00:07:31.766 ************************************ 00:07:31.766 08:20:43 -- common/autotest_common.sh@1142 -- # return 0 00:07:31.766 08:20:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:31.766 08:20:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.766 08:20:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.766 08:20:43 -- common/autotest_common.sh@10 -- # set +x 00:07:31.766 ************************************ 00:07:31.766 START TEST skip_rpc 00:07:31.766 ************************************ 00:07:31.766 08:20:43 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:31.766 * Looking for test storage... 
00:07:31.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:31.766 08:20:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:31.766 08:20:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:31.766 08:20:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:31.766 08:20:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.766 08:20:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.766 08:20:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.766 ************************************ 00:07:31.766 START TEST skip_rpc 00:07:31.766 ************************************ 00:07:31.766 08:20:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:07:31.766 08:20:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2181968 00:07:31.766 08:20:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:31.766 08:20:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:31.766 08:20:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:31.766 [2024-07-23 08:20:43.979307] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:31.766 [2024-07-23 08:20:43.979597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181968 ] 00:07:31.766 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.766 [2024-07-23 08:20:44.230389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.335 [2024-07-23 08:20:44.730757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2181968 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2181968 ']' 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2181968 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2181968 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2181968' 00:07:36.526 killing process with pid 2181968 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2181968 00:07:36.526 08:20:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2181968 00:07:41.833 00:07:41.833 real 0m9.864s 00:07:41.833 user 0m9.029s 00:07:41.833 sys 0m0.799s 00:07:41.833 08:20:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.833 08:20:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.833 ************************************ 00:07:41.833 END TEST skip_rpc 00:07:41.833 ************************************ 00:07:41.833 08:20:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:41.833 08:20:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:41.833 08:20:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.833 08:20:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.833 08:20:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.833 ************************************ 00:07:41.833 START TEST skip_rpc_with_json 00:07:41.833 ************************************ 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2183063 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2183063 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2183063 ']' 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
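The skip_rpc_with_json test starting here saves the live configuration and replays it without the RPC server. Compressed into a hedged sketch (binary paths, the -m 0x1 core mask, the config.json/log.txt locations and the default socket are all taken from the log; the sleep is only a stand-in for the harness's waitforlisten helper):

    # with the target running, create the TCP transport and capture the full config
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    # (the real test stops the first target here before the replay)
    # restart the target from that file with the RPC server disabled and
    # verify from its log that the TCP transport was initialized again
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt

The grep at the end is the same check the suite performs once the replayed target has come up (see the skip_rpc.sh@51 line further below).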
00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.833 08:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:41.833 [2024-07-23 08:20:53.975657] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:41.833 [2024-07-23 08:20:53.975992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183063 ] 00:07:41.833 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.833 [2024-07-23 08:20:54.282675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.402 [2024-07-23 08:20:54.757728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.308 [2024-07-23 08:20:56.471427] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:44.308 request: 00:07:44.308 { 00:07:44.308 "trtype": "tcp", 00:07:44.308 "method": "nvmf_get_transports", 00:07:44.308 "req_id": 1 00:07:44.308 } 00:07:44.308 Got JSON-RPC error response 00:07:44.308 response: 00:07:44.308 { 00:07:44.308 "code": -19, 00:07:44.308 "message": "No such device" 00:07:44.308 } 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.308 [2024-07-23 08:20:56.483598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.308 08:20:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:44.308 { 00:07:44.308 "subsystems": [ 00:07:44.308 { 00:07:44.308 "subsystem": "keyring", 00:07:44.308 "config": [] 00:07:44.308 }, 00:07:44.308 { 00:07:44.308 "subsystem": "iobuf", 00:07:44.308 "config": [ 00:07:44.308 { 00:07:44.308 "method": "iobuf_set_options", 00:07:44.308 "params": { 00:07:44.308 "small_pool_count": 8192, 00:07:44.308 "large_pool_count": 1024, 00:07:44.308 "small_bufsize": 8192, 00:07:44.308 "large_bufsize": 135168 00:07:44.308 } 00:07:44.308 } 00:07:44.308 ] 00:07:44.308 }, 00:07:44.308 { 00:07:44.308 "subsystem": 
"sock", 00:07:44.308 "config": [ 00:07:44.308 { 00:07:44.308 "method": "sock_set_default_impl", 00:07:44.308 "params": { 00:07:44.308 "impl_name": "posix" 00:07:44.308 } 00:07:44.308 }, 00:07:44.308 { 00:07:44.308 "method": "sock_impl_set_options", 00:07:44.309 "params": { 00:07:44.309 "impl_name": "ssl", 00:07:44.309 "recv_buf_size": 4096, 00:07:44.309 "send_buf_size": 4096, 00:07:44.309 "enable_recv_pipe": true, 00:07:44.309 "enable_quickack": false, 00:07:44.309 "enable_placement_id": 0, 00:07:44.309 "enable_zerocopy_send_server": true, 00:07:44.309 "enable_zerocopy_send_client": false, 00:07:44.309 "zerocopy_threshold": 0, 00:07:44.309 "tls_version": 0, 00:07:44.309 "enable_ktls": false 00:07:44.309 } 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "method": "sock_impl_set_options", 00:07:44.309 "params": { 00:07:44.309 "impl_name": "posix", 00:07:44.309 "recv_buf_size": 2097152, 00:07:44.309 "send_buf_size": 2097152, 00:07:44.309 "enable_recv_pipe": true, 00:07:44.309 "enable_quickack": false, 00:07:44.309 "enable_placement_id": 0, 00:07:44.309 "enable_zerocopy_send_server": true, 00:07:44.309 "enable_zerocopy_send_client": false, 00:07:44.309 "zerocopy_threshold": 0, 00:07:44.309 "tls_version": 0, 00:07:44.309 "enable_ktls": false 00:07:44.309 } 00:07:44.309 } 00:07:44.309 ] 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "subsystem": "vmd", 00:07:44.309 "config": [] 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "subsystem": "accel", 00:07:44.309 "config": [ 00:07:44.309 { 00:07:44.309 "method": "accel_set_options", 00:07:44.309 "params": { 00:07:44.309 "small_cache_size": 128, 00:07:44.309 "large_cache_size": 16, 00:07:44.309 "task_count": 2048, 00:07:44.309 "sequence_count": 2048, 00:07:44.309 "buf_count": 2048 00:07:44.309 } 00:07:44.309 } 00:07:44.309 ] 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "subsystem": "bdev", 00:07:44.309 "config": [ 00:07:44.309 { 00:07:44.309 "method": "bdev_set_options", 00:07:44.309 "params": { 00:07:44.309 "bdev_io_pool_size": 65535, 00:07:44.309 "bdev_io_cache_size": 256, 00:07:44.309 "bdev_auto_examine": true, 00:07:44.309 "iobuf_small_cache_size": 128, 00:07:44.309 "iobuf_large_cache_size": 16 00:07:44.309 } 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "method": "bdev_raid_set_options", 00:07:44.309 "params": { 00:07:44.309 "process_window_size_kb": 1024, 00:07:44.309 "process_max_bandwidth_mb_sec": 0 00:07:44.309 } 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "method": "bdev_iscsi_set_options", 00:07:44.309 "params": { 00:07:44.309 "timeout_sec": 30 00:07:44.309 } 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "method": "bdev_nvme_set_options", 00:07:44.309 "params": { 00:07:44.309 "action_on_timeout": "none", 00:07:44.309 "timeout_us": 0, 00:07:44.309 "timeout_admin_us": 0, 00:07:44.309 "keep_alive_timeout_ms": 10000, 00:07:44.309 "arbitration_burst": 0, 00:07:44.309 "low_priority_weight": 0, 00:07:44.309 "medium_priority_weight": 0, 00:07:44.309 "high_priority_weight": 0, 00:07:44.309 "nvme_adminq_poll_period_us": 10000, 00:07:44.309 "nvme_ioq_poll_period_us": 0, 00:07:44.309 "io_queue_requests": 0, 00:07:44.309 "delay_cmd_submit": true, 00:07:44.309 "transport_retry_count": 4, 00:07:44.309 "bdev_retry_count": 3, 00:07:44.309 "transport_ack_timeout": 0, 00:07:44.309 "ctrlr_loss_timeout_sec": 0, 00:07:44.309 "reconnect_delay_sec": 0, 00:07:44.309 "fast_io_fail_timeout_sec": 0, 00:07:44.309 "disable_auto_failback": false, 00:07:44.309 "generate_uuids": false, 00:07:44.309 "transport_tos": 0, 00:07:44.309 "nvme_error_stat": false, 00:07:44.309 "rdma_srq_size": 
0, 00:07:44.309 "io_path_stat": false, 00:07:44.309 "allow_accel_sequence": false, 00:07:44.309 "rdma_max_cq_size": 0, 00:07:44.309 "rdma_cm_event_timeout_ms": 0, 00:07:44.309 "dhchap_digests": [ 00:07:44.309 "sha256", 00:07:44.309 "sha384", 00:07:44.309 "sha512" 00:07:44.309 ], 00:07:44.309 "dhchap_dhgroups": [ 00:07:44.309 "null", 00:07:44.309 "ffdhe2048", 00:07:44.309 "ffdhe3072", 00:07:44.309 "ffdhe4096", 00:07:44.309 "ffdhe6144", 00:07:44.309 "ffdhe8192" 00:07:44.309 ] 00:07:44.309 } 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "method": "bdev_nvme_set_hotplug", 00:07:44.309 "params": { 00:07:44.309 "period_us": 100000, 00:07:44.309 "enable": false 00:07:44.309 } 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "method": "bdev_wait_for_examine" 00:07:44.309 } 00:07:44.309 ] 00:07:44.309 }, 00:07:44.309 { 00:07:44.309 "subsystem": "scsi", 00:07:44.309 "config": null 00:07:44.309 }, 00:07:44.310 { 00:07:44.310 "subsystem": "scheduler", 00:07:44.310 "config": [ 00:07:44.310 { 00:07:44.310 "method": "framework_set_scheduler", 00:07:44.310 "params": { 00:07:44.310 "name": "static" 00:07:44.310 } 00:07:44.310 } 00:07:44.310 ] 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "subsystem": "vhost_scsi", 00:07:44.310 "config": [] 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "subsystem": "vhost_blk", 00:07:44.310 "config": [] 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "subsystem": "ublk", 00:07:44.310 "config": [] 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "subsystem": "nbd", 00:07:44.310 "config": [] 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "subsystem": "nvmf", 00:07:44.310 "config": [ 00:07:44.310 { 00:07:44.310 "method": "nvmf_set_config", 00:07:44.310 "params": { 00:07:44.310 "discovery_filter": "match_any", 00:07:44.310 "admin_cmd_passthru": { 00:07:44.310 "identify_ctrlr": false 00:07:44.310 } 00:07:44.310 } 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "method": "nvmf_set_max_subsystems", 00:07:44.310 "params": { 00:07:44.310 "max_subsystems": 1024 00:07:44.310 } 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "method": "nvmf_set_crdt", 00:07:44.310 "params": { 00:07:44.310 "crdt1": 0, 00:07:44.310 "crdt2": 0, 00:07:44.310 "crdt3": 0 00:07:44.310 } 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "method": "nvmf_create_transport", 00:07:44.310 "params": { 00:07:44.310 "trtype": "TCP", 00:07:44.310 "max_queue_depth": 128, 00:07:44.310 "max_io_qpairs_per_ctrlr": 127, 00:07:44.310 "in_capsule_data_size": 4096, 00:07:44.310 "max_io_size": 131072, 00:07:44.310 "io_unit_size": 131072, 00:07:44.310 "max_aq_depth": 128, 00:07:44.310 "num_shared_buffers": 511, 00:07:44.310 "buf_cache_size": 4294967295, 00:07:44.310 "dif_insert_or_strip": false, 00:07:44.310 "zcopy": false, 00:07:44.310 "c2h_success": true, 00:07:44.310 "sock_priority": 0, 00:07:44.310 "abort_timeout_sec": 1, 00:07:44.310 "ack_timeout": 0, 00:07:44.310 "data_wr_pool_size": 0 00:07:44.310 } 00:07:44.310 } 00:07:44.310 ] 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "subsystem": "iscsi", 00:07:44.310 "config": [ 00:07:44.310 { 00:07:44.310 "method": "iscsi_set_options", 00:07:44.310 "params": { 00:07:44.310 "node_base": "iqn.2016-06.io.spdk", 00:07:44.310 "max_sessions": 128, 00:07:44.310 "max_connections_per_session": 2, 00:07:44.310 "max_queue_depth": 64, 00:07:44.310 "default_time2wait": 2, 00:07:44.310 "default_time2retain": 20, 00:07:44.310 "first_burst_length": 8192, 00:07:44.310 "immediate_data": true, 00:07:44.310 "allow_duplicated_isid": false, 00:07:44.310 "error_recovery_level": 0, 00:07:44.310 "nop_timeout": 60, 00:07:44.310 
"nop_in_interval": 30, 00:07:44.310 "disable_chap": false, 00:07:44.310 "require_chap": false, 00:07:44.310 "mutual_chap": false, 00:07:44.310 "chap_group": 0, 00:07:44.310 "max_large_datain_per_connection": 64, 00:07:44.310 "max_r2t_per_connection": 4, 00:07:44.310 "pdu_pool_size": 36864, 00:07:44.310 "immediate_data_pool_size": 16384, 00:07:44.310 "data_out_pool_size": 2048 00:07:44.310 } 00:07:44.310 } 00:07:44.310 ] 00:07:44.310 } 00:07:44.310 ] 00:07:44.310 } 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2183063 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2183063 ']' 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2183063 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2183063 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2183063' 00:07:44.310 killing process with pid 2183063 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2183063 00:07:44.310 08:20:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2183063 00:07:49.590 08:21:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2183927 00:07:49.590 08:21:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:49.590 08:21:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2183927 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2183927 ']' 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2183927 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2183927 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2183927' 00:07:54.866 killing process with pid 2183927 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2183927 00:07:54.866 08:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2183927 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep 
-q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:59.066 00:07:59.066 real 0m17.385s 00:07:59.066 user 0m16.655s 00:07:59.066 sys 0m1.977s 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:59.066 ************************************ 00:07:59.066 END TEST skip_rpc_with_json 00:07:59.066 ************************************ 00:07:59.066 08:21:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:59.066 08:21:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:59.066 08:21:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.066 08:21:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.066 08:21:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.066 ************************************ 00:07:59.066 START TEST skip_rpc_with_delay 00:07:59.066 ************************************ 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:59.066 [2024-07-23 08:21:11.436765] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
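The app.c *ERROR* just above is the expected outcome: skip_rpc_with_delay is a negative test, and its NOT wrapper only passes when spdk_tgt rejects the flag combination. A minimal sketch of the same check, assuming the binary path shown in the log:

    # --wait-for-rpc delays init until an RPC arrives, which is meaningless with --no-rpc-server,
    # so the target must exit non-zero here for the test to count as a pass
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt unexpectedly accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi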
00:07:59.066 [2024-07-23 08:21:11.437088] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:59.066 00:07:59.066 real 0m0.329s 00:07:59.066 user 0m0.189s 00:07:59.066 sys 0m0.136s 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.066 08:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:59.066 ************************************ 00:07:59.066 END TEST skip_rpc_with_delay 00:07:59.066 ************************************ 00:07:59.066 08:21:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:59.066 08:21:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:59.325 08:21:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:59.325 08:21:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:59.325 08:21:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:59.325 08:21:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.325 08:21:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.325 ************************************ 00:07:59.325 START TEST exit_on_failed_rpc_init 00:07:59.325 ************************************ 00:07:59.325 08:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:07:59.325 08:21:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2185724 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2185724 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2185724 ']' 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.326 08:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:59.584 [2024-07-23 08:21:11.848404] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:59.584 [2024-07-23 08:21:11.848626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185724 ] 00:07:59.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.843 [2024-07-23 08:21:12.145572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.411 [2024-07-23 08:21:12.640900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:02.953 08:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:02.953 [2024-07-23 08:21:15.307897] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:02.953 [2024-07-23 08:21:15.308207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186135 ] 00:08:02.953 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.212 [2024-07-23 08:21:15.540434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.477 [2024-07-23 08:21:15.852804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.477 [2024-07-23 08:21:15.852995] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:03.477 [2024-07-23 08:21:15.853054] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:03.477 [2024-07-23 08:21:15.853108] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2185724 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2185724 ']' 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2185724 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2185724 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2185724' 00:08:04.088 killing process with pid 2185724 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2185724 00:08:04.088 08:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2185724 00:08:09.380 00:08:09.380 real 0m9.611s 00:08:09.380 user 0m10.925s 00:08:09.380 sys 0m1.406s 00:08:09.380 08:21:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.380 08:21:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:09.380 ************************************ 00:08:09.380 END TEST exit_on_failed_rpc_init 00:08:09.380 ************************************ 00:08:09.380 08:21:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:09.380 08:21:21 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:09.380 00:08:09.380 real 0m37.616s 00:08:09.380 user 0m36.949s 00:08:09.380 sys 0m4.616s 00:08:09.380 08:21:21 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.380 08:21:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.380 ************************************ 00:08:09.381 END TEST skip_rpc 00:08:09.381 ************************************ 00:08:09.381 08:21:21 -- common/autotest_common.sh@1142 -- # return 0 00:08:09.381 08:21:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:09.381 08:21:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.381 08:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.381 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:08:09.381 ************************************ 00:08:09.381 START TEST rpc_client 00:08:09.381 ************************************ 00:08:09.381 08:21:21 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:09.381 * Looking for test storage... 00:08:09.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:09.381 08:21:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:09.381 OK 00:08:09.381 08:21:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:09.381 00:08:09.381 real 0m0.181s 00:08:09.381 user 0m0.079s 00:08:09.381 sys 0m0.110s 00:08:09.381 08:21:21 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.381 08:21:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:09.381 ************************************ 00:08:09.381 END TEST rpc_client 00:08:09.381 ************************************ 00:08:09.381 08:21:21 -- common/autotest_common.sh@1142 -- # return 0 00:08:09.381 08:21:21 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:09.381 08:21:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.381 08:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.381 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:08:09.381 ************************************ 00:08:09.381 START TEST json_config 00:08:09.381 ************************************ 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.381 
08:21:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.381 08:21:21 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.381 08:21:21 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.381 08:21:21 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.381 08:21:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.381 08:21:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.381 08:21:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.381 08:21:21 json_config -- paths/export.sh@5 -- # export PATH 00:08:09.381 08:21:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@47 -- # : 0 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.381 08:21:21 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.381 08:21:21 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:08:09.381 INFO: JSON configuration test init 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.381 08:21:21 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:08:09.381 08:21:21 json_config -- json_config/common.sh@9 -- # local app=target 00:08:09.381 08:21:21 json_config -- json_config/common.sh@10 -- # shift 00:08:09.381 08:21:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:09.381 08:21:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:09.381 08:21:21 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:09.381 08:21:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:09.381 08:21:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:09.381 08:21:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2186916 00:08:09.381 08:21:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:09.381 08:21:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:09.381 Waiting for target to run... 00:08:09.381 08:21:21 json_config -- json_config/common.sh@25 -- # waitforlisten 2186916 /var/tmp/spdk_tgt.sock 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@829 -- # '[' -z 2186916 ']' 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:09.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.381 08:21:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.381 [2024-07-23 08:21:21.869272] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:09.381 [2024-07-23 08:21:21.869511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186916 ] 00:08:09.640 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.210 [2024-07-23 08:21:22.457082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.470 [2024-07-23 08:21:22.910412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.730 08:21:23 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.730 08:21:23 json_config -- common/autotest_common.sh@862 -- # return 0 00:08:10.730 08:21:23 json_config -- json_config/common.sh@26 -- # echo '' 00:08:10.730 00:08:10.730 08:21:23 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:08:10.730 08:21:23 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:08:10.730 08:21:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.730 08:21:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:10.730 08:21:23 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:08:10.730 08:21:23 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:08:10.730 08:21:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.730 08:21:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:10.730 08:21:23 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:10.730 08:21:23 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:08:10.730 08:21:23 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:16.001 08:21:28 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:08:16.001 08:21:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:16.001 08:21:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.001 08:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:16.001 08:21:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:16.001 08:21:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:16.001 08:21:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:16.001 08:21:28 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:08:16.001 08:21:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:16.001 08:21:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@51 -- # sort 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:08:16.260 08:21:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.260 08:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@59 -- # return 0 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:08:16.260 08:21:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.260 08:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:08:16.260 08:21:28 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:16.260 08:21:28 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:16.826 MallocForNvmf0 00:08:16.826 08:21:29 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:16.826 08:21:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:17.392 MallocForNvmf1 00:08:17.393 08:21:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:17.393 08:21:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:17.961 [2024-07-23 08:21:30.409306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.961 08:21:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.961 08:21:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.530 08:21:30 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:18.530 08:21:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:19.097 08:21:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:19.097 08:21:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:20.036 08:21:32 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:20.036 08:21:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:20.296 [2024-07-23 08:21:32.779823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:20.296 08:21:32 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:08:20.296 08:21:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.296 08:21:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.555 08:21:32 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:08:20.555 08:21:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.555 08:21:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.555 08:21:32 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:08:20.555 08:21:32 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:20.555 08:21:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:21.141 MallocBdevForConfigChangeCheck 00:08:21.141 08:21:33 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:08:21.141 08:21:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.141 08:21:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.141 08:21:33 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:08:21.141 08:21:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:22.080 08:21:34 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:08:22.080 INFO: shutting down applications... 00:08:22.080 08:21:34 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:08:22.080 08:21:34 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:08:22.080 08:21:34 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:08:22.080 08:21:34 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:23.456 Calling clear_iscsi_subsystem 00:08:23.456 Calling clear_nvmf_subsystem 00:08:23.456 Calling clear_nbd_subsystem 00:08:23.456 Calling clear_ublk_subsystem 00:08:23.456 Calling clear_vhost_blk_subsystem 00:08:23.456 Calling clear_vhost_scsi_subsystem 00:08:23.456 Calling clear_bdev_subsystem 00:08:23.715 08:21:35 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:23.715 08:21:35 json_config -- json_config/json_config.sh@347 -- # count=100 00:08:23.715 08:21:35 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:08:23.715 08:21:35 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:23.716 08:21:35 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:23.716 08:21:35 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:24.287 08:21:36 json_config -- json_config/json_config.sh@349 -- # break 00:08:24.287 08:21:36 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:08:24.287 08:21:36 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:08:24.287 08:21:36 json_config -- json_config/common.sh@31 -- # local app=target 00:08:24.287 08:21:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:24.287 08:21:36 json_config -- json_config/common.sh@35 -- # [[ -n 2186916 ]] 00:08:24.287 08:21:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2186916 00:08:24.287 08:21:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:24.287 08:21:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:24.287 08:21:36 json_config -- json_config/common.sh@41 -- # kill -0 2186916 00:08:24.287 08:21:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:24.889 08:21:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:24.889 08:21:37 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:08:24.889 08:21:37 json_config -- json_config/common.sh@41 -- # kill -0 2186916 00:08:24.889 08:21:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:25.459 08:21:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:25.459 08:21:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:25.459 08:21:37 json_config -- json_config/common.sh@41 -- # kill -0 2186916 00:08:25.459 08:21:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:25.719 08:21:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:25.719 08:21:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:25.719 08:21:38 json_config -- json_config/common.sh@41 -- # kill -0 2186916 00:08:25.719 08:21:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:26.289 08:21:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:26.289 08:21:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:26.289 08:21:38 json_config -- json_config/common.sh@41 -- # kill -0 2186916 00:08:26.289 08:21:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:26.289 08:21:38 json_config -- json_config/common.sh@43 -- # break 00:08:26.289 08:21:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:26.289 08:21:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:26.289 SPDK target shutdown done 00:08:26.289 08:21:38 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:08:26.289 INFO: relaunching applications... 00:08:26.289 08:21:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:26.289 08:21:38 json_config -- json_config/common.sh@9 -- # local app=target 00:08:26.289 08:21:38 json_config -- json_config/common.sh@10 -- # shift 00:08:26.289 08:21:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:26.289 08:21:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:26.289 08:21:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:26.289 08:21:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.289 08:21:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.289 08:21:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2188910 00:08:26.289 08:21:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:26.289 08:21:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:26.289 Waiting for target to run... 00:08:26.289 08:21:38 json_config -- json_config/common.sh@25 -- # waitforlisten 2188910 /var/tmp/spdk_tgt.sock 00:08:26.289 08:21:38 json_config -- common/autotest_common.sh@829 -- # '[' -z 2188910 ']' 00:08:26.289 08:21:38 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:26.289 08:21:38 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.289 08:21:38 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:26.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
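The shutdown sequence that just completed (and is repeated later for the extra_key run) is the kill/poll loop from test/json_config/common.sh: send the target a single SIGINT, then check the pid every half second for up to 30 iterations before reporting "SPDK target shutdown done". Roughly:

    kill -SIGINT "$spdk_tgt_pid"                           # ask for a clean shutdown
    for ((i = 0; i < 30; i++)); do
        kill -0 "$spdk_tgt_pid" 2> /dev/null || break      # process gone -> done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'

After that the target is relaunched from the saved spdk_tgt_config.json so the next step can compare configurations.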
00:08:26.289 08:21:38 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.289 08:21:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:26.549 [2024-07-23 08:21:38.829131] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:26.549 [2024-07-23 08:21:38.829327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188910 ] 00:08:26.549 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.118 [2024-07-23 08:21:39.562970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.687 [2024-07-23 08:21:40.039981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.964 [2024-07-23 08:21:44.503094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.964 [2024-07-23 08:21:44.536998] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:32.964 08:21:45 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.964 08:21:45 json_config -- common/autotest_common.sh@862 -- # return 0 00:08:32.964 08:21:45 json_config -- json_config/common.sh@26 -- # echo '' 00:08:32.964 00:08:32.964 08:21:45 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:08:32.964 08:21:45 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:32.964 INFO: Checking if target configuration is the same... 00:08:32.964 08:21:45 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:32.964 08:21:45 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:08:32.964 08:21:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:32.964 + '[' 2 -ne 2 ']' 00:08:32.964 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:32.964 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:32.964 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:32.964 +++ basename /dev/fd/62 00:08:32.964 ++ mktemp /tmp/62.XXX 00:08:32.964 + tmp_file_1=/tmp/62.GHw 00:08:32.964 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:32.964 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:32.964 + tmp_file_2=/tmp/spdk_tgt_config.json.e56 00:08:32.964 + ret=0 00:08:32.964 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:33.901 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:33.901 + diff -u /tmp/62.GHw /tmp/spdk_tgt_config.json.e56 00:08:33.901 + echo 'INFO: JSON config files are the same' 00:08:33.901 INFO: JSON config files are the same 00:08:33.901 + rm /tmp/62.GHw /tmp/spdk_tgt_config.json.e56 00:08:33.901 + exit 0 00:08:33.901 08:21:46 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:08:33.901 08:21:46 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
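The "is the configuration the same" check above works by dumping the live configuration over RPC (save_config), normalizing both it and the spdk_tgt_config.json on disk with config_filter.py -method sort, and diffing the two normalized copies; identical output means exit 0 and "JSON config files are the same". A condensed sketch of what json_diff.sh is doing here — the temp-file names are the mktemp results shown in the log, and the stdin/stdout plumbing through config_filter.py is assumed, since xtrace does not show redirections:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    test/json_config/config_filter.py -method sort < /tmp/live.json       > /tmp/62.GHw
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/spdk_tgt_config.json.e56
    diff -u /tmp/62.GHw /tmp/spdk_tgt_config.json.e56 && echo 'INFO: JSON config files are the same'

The follow-up check deletes MallocBdevForConfigChangeCheck over RPC and runs the same comparison again, this time expecting the diff to fail.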
00:08:33.901 INFO: changing configuration and checking if this can be detected... 00:08:33.901 08:21:46 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:33.901 08:21:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:34.468 08:21:46 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:34.468 08:21:46 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:08:34.468 08:21:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:34.468 + '[' 2 -ne 2 ']' 00:08:34.468 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:34.468 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:34.468 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:34.468 +++ basename /dev/fd/62 00:08:34.468 ++ mktemp /tmp/62.XXX 00:08:34.468 + tmp_file_1=/tmp/62.7s6 00:08:34.468 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:34.468 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:34.468 + tmp_file_2=/tmp/spdk_tgt_config.json.LjM 00:08:34.468 + ret=0 00:08:34.468 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:35.036 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:35.036 + diff -u /tmp/62.7s6 /tmp/spdk_tgt_config.json.LjM 00:08:35.036 + ret=1 00:08:35.036 + echo '=== Start of file: /tmp/62.7s6 ===' 00:08:35.036 + cat /tmp/62.7s6 00:08:35.036 + echo '=== End of file: /tmp/62.7s6 ===' 00:08:35.036 + echo '' 00:08:35.036 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LjM ===' 00:08:35.036 + cat /tmp/spdk_tgt_config.json.LjM 00:08:35.036 + echo '=== End of file: /tmp/spdk_tgt_config.json.LjM ===' 00:08:35.036 + echo '' 00:08:35.036 + rm /tmp/62.7s6 /tmp/spdk_tgt_config.json.LjM 00:08:35.036 + exit 1 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:08:35.036 INFO: configuration change detected. 
00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@321 -- # [[ -n 2188910 ]] 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@197 -- # uname -s 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:35.036 08:21:47 json_config -- json_config/json_config.sh@327 -- # killprocess 2188910 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 2188910 ']' 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@952 -- # kill -0 2188910 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@953 -- # uname 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2188910 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2188910' 00:08:35.036 killing process with pid 2188910 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@967 -- # kill 2188910 00:08:35.036 08:21:47 json_config -- common/autotest_common.sh@972 -- # wait 2188910 00:08:39.228 08:21:50 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:39.228 08:21:50 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:08:39.228 08:21:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.228 08:21:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:39.228 08:21:50 json_config -- json_config/json_config.sh@332 -- # return 0 00:08:39.228 08:21:50 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:08:39.228 INFO: Success 00:08:39.228 00:08:39.228 real 0m29.341s 
00:08:39.228 user 0m36.627s 00:08:39.228 sys 0m4.081s 00:08:39.228 08:21:50 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.228 08:21:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:39.228 ************************************ 00:08:39.228 END TEST json_config 00:08:39.228 ************************************ 00:08:39.228 08:21:50 -- common/autotest_common.sh@1142 -- # return 0 00:08:39.228 08:21:50 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:39.228 08:21:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.228 08:21:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.228 08:21:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.228 ************************************ 00:08:39.228 START TEST json_config_extra_key 00:08:39.228 ************************************ 00:08:39.228 08:21:51 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:39.228 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.228 08:21:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:39.228 08:21:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.228 08:21:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.228 08:21:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.228 08:21:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.228 08:21:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.229 08:21:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.229 08:21:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.229 08:21:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.229 08:21:51 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.229 08:21:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.229 08:21:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.229 08:21:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:39.229 08:21:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.229 08:21:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:39.229 08:21:51 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:39.229 INFO: launching applications... 00:08:39.229 08:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2190472 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:39.229 Waiting for target to run... 00:08:39.229 08:21:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2190472 /var/tmp/spdk_tgt.sock 00:08:39.229 08:21:51 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2190472 ']' 00:08:39.229 08:21:51 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:39.229 08:21:51 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.229 08:21:51 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:39.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:39.229 08:21:51 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.229 08:21:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:39.229 [2024-07-23 08:21:51.347204] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
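The json_config_extra_key run starting here takes the opposite approach from the previous test: instead of building the configuration over RPC, it boots the target directly from a pre-built file (test/json_config/extra_key.json, whose contents are not shown in this log) and then only has to confirm that the target comes up and shuts down cleanly. Condensed, with the command line and helper names as they appear in the log:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid=$!
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock
    kill -SIGINT "$app_pid"        # then poll with kill -0, as in the earlier shutdown loop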
00:08:39.229 [2024-07-23 08:21:51.347465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190472 ] 00:08:39.229 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.797 [2024-07-23 08:21:52.047687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.055 [2024-07-23 08:21:52.409565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.434 08:21:53 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.434 08:21:53 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:41.434 00:08:41.434 08:21:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:41.434 INFO: shutting down applications... 00:08:41.434 08:21:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2190472 ]] 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2190472 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:41.434 08:21:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:41.694 08:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:41.694 08:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:41.694 08:21:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:41.694 08:21:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:42.289 08:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:42.289 08:21:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:42.289 08:21:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:42.289 08:21:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:42.857 08:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:42.857 08:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:42.857 08:21:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:42.857 08:21:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:43.425 08:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:43.425 08:21:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:43.425 08:21:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:43.425 08:21:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:43.684 08:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:43.684 08:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:43.684 08:21:56 
json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:43.684 08:21:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:44.252 08:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:44.252 08:21:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:44.252 08:21:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:44.252 08:21:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:44.857 08:21:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:44.857 08:21:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:44.857 08:21:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:44.857 08:21:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:45.428 08:21:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:45.428 08:21:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:45.428 08:21:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:45.428 08:21:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:45.688 08:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:45.688 08:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:45.688 08:21:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:45.688 08:21:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:46.256 08:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:46.256 08:21:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:46.256 08:21:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:46.256 08:21:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:46.827 08:21:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:46.827 08:21:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:46.827 08:21:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2190472 00:08:46.827 08:21:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:46.827 08:21:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:46.827 08:21:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:46.827 08:21:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:46.827 SPDK target shutdown done 00:08:46.827 08:21:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:46.827 Success 00:08:46.827 00:08:46.827 real 0m8.158s 00:08:46.827 user 0m7.991s 00:08:46.827 sys 0m1.072s 00:08:46.827 08:21:59 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.827 08:21:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:46.827 ************************************ 00:08:46.827 END TEST json_config_extra_key 00:08:46.827 ************************************ 00:08:46.827 08:21:59 -- common/autotest_common.sh@1142 -- # return 0 00:08:46.827 08:21:59 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:46.827 08:21:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:46.827 08:21:59 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:46.827 08:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.827 ************************************ 00:08:46.827 START TEST alias_rpc 00:08:46.827 ************************************ 00:08:46.827 08:21:59 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:46.827 * Looking for test storage... 00:08:47.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:47.087 08:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:47.087 08:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2191363 00:08:47.087 08:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:47.087 08:21:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2191363 00:08:47.087 08:21:59 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2191363 ']' 00:08:47.087 08:21:59 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.087 08:21:59 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.087 08:21:59 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.087 08:21:59 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.087 08:21:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.087 [2024-07-23 08:21:59.459702] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
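The json_config_extra_key run above stops the target through the helper in test/json_config/common.sh: the trace shows a SIGINT sent to the spdk_tgt PID (common.sh@38), then a bounded poll that re-checks the PID with kill -0 and sleeps 0.5 s between attempts (common.sh@40-45). A minimal bash sketch of that pattern follows; the function shape and names are illustrative, not the exact SPDK source.

  # Sketch of the SIGINT-then-poll shutdown visible in the trace above (illustrative).
  shutdown_target() {
      local pid=$1
      kill -SIGINT "$pid"                         # ask the app to exit cleanly
      for ((i = 0; i < 30; i++)); do              # bounded wait: 30 * 0.5 s
          kill -0 "$pid" 2>/dev/null || return 0  # kill -0 only tests that the PID still exists
          sleep 0.5
      done
      return 1                                    # still running after the limit
  }

Here the poll drops out well before the 30-iteration limit, which is why the script can print 'SPDK target shutdown done' and 'Success'.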
00:08:47.087 [2024-07-23 08:21:59.459894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191363 ] 00:08:47.087 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.347 [2024-07-23 08:21:59.658415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.917 [2024-07-23 08:22:00.139617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.827 08:22:01 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.827 08:22:01 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:49.827 08:22:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:50.088 08:22:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2191363 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2191363 ']' 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2191363 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2191363 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2191363' 00:08:50.088 killing process with pid 2191363 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@967 -- # kill 2191363 00:08:50.088 08:22:02 alias_rpc -- common/autotest_common.sh@972 -- # wait 2191363 00:08:55.372 00:08:55.372 real 0m7.974s 00:08:55.372 user 0m8.595s 00:08:55.372 sys 0m1.052s 00:08:55.372 08:22:07 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.372 08:22:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 ************************************ 00:08:55.372 END TEST alias_rpc 00:08:55.372 ************************************ 00:08:55.372 08:22:07 -- common/autotest_common.sh@1142 -- # return 0 00:08:55.372 08:22:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:08:55.372 08:22:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:55.372 08:22:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:55.372 08:22:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.372 08:22:07 -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 ************************************ 00:08:55.372 START TEST spdkcli_tcp 00:08:55.372 ************************************ 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:55.372 * Looking for test storage... 
00:08:55.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2192319 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:55.372 08:22:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2192319 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2192319 ']' 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.372 08:22:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 [2024-07-23 08:22:07.645051] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:55.372 [2024-07-23 08:22:07.645413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192319 ] 00:08:55.372 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.632 [2024-07-23 08:22:07.952955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:56.200 [2024-07-23 08:22:08.463447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.200 [2024-07-23 08:22:08.463454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.135 08:22:09 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.135 08:22:09 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:08:57.135 08:22:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2192584 00:08:57.135 08:22:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:57.135 08:22:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:57.394 [ 00:08:57.394 "bdev_malloc_delete", 00:08:57.394 "bdev_malloc_create", 00:08:57.394 "bdev_null_resize", 00:08:57.394 "bdev_null_delete", 00:08:57.394 "bdev_null_create", 00:08:57.394 "bdev_nvme_cuse_unregister", 00:08:57.394 "bdev_nvme_cuse_register", 00:08:57.394 "bdev_opal_new_user", 00:08:57.394 "bdev_opal_set_lock_state", 00:08:57.394 "bdev_opal_delete", 00:08:57.394 "bdev_opal_get_info", 00:08:57.394 "bdev_opal_create", 00:08:57.394 "bdev_nvme_opal_revert", 00:08:57.394 "bdev_nvme_opal_init", 00:08:57.394 "bdev_nvme_send_cmd", 00:08:57.394 "bdev_nvme_get_path_iostat", 00:08:57.394 "bdev_nvme_get_mdns_discovery_info", 00:08:57.394 "bdev_nvme_stop_mdns_discovery", 00:08:57.394 "bdev_nvme_start_mdns_discovery", 00:08:57.394 "bdev_nvme_set_multipath_policy", 00:08:57.394 "bdev_nvme_set_preferred_path", 00:08:57.394 "bdev_nvme_get_io_paths", 00:08:57.394 "bdev_nvme_remove_error_injection", 00:08:57.394 "bdev_nvme_add_error_injection", 00:08:57.394 "bdev_nvme_get_discovery_info", 00:08:57.394 "bdev_nvme_stop_discovery", 00:08:57.394 "bdev_nvme_start_discovery", 00:08:57.394 "bdev_nvme_get_controller_health_info", 00:08:57.394 "bdev_nvme_disable_controller", 00:08:57.394 "bdev_nvme_enable_controller", 00:08:57.394 "bdev_nvme_reset_controller", 00:08:57.394 "bdev_nvme_get_transport_statistics", 00:08:57.394 "bdev_nvme_apply_firmware", 00:08:57.394 "bdev_nvme_detach_controller", 00:08:57.394 "bdev_nvme_get_controllers", 00:08:57.394 "bdev_nvme_attach_controller", 00:08:57.394 "bdev_nvme_set_hotplug", 00:08:57.394 "bdev_nvme_set_options", 00:08:57.394 "bdev_passthru_delete", 00:08:57.394 "bdev_passthru_create", 00:08:57.394 "bdev_lvol_set_parent_bdev", 00:08:57.394 "bdev_lvol_set_parent", 00:08:57.394 "bdev_lvol_check_shallow_copy", 00:08:57.394 "bdev_lvol_start_shallow_copy", 00:08:57.394 "bdev_lvol_grow_lvstore", 00:08:57.394 "bdev_lvol_get_lvols", 00:08:57.394 "bdev_lvol_get_lvstores", 00:08:57.394 "bdev_lvol_delete", 00:08:57.394 "bdev_lvol_set_read_only", 00:08:57.394 "bdev_lvol_resize", 00:08:57.394 "bdev_lvol_decouple_parent", 00:08:57.394 "bdev_lvol_inflate", 00:08:57.394 "bdev_lvol_rename", 00:08:57.394 "bdev_lvol_clone_bdev", 00:08:57.394 "bdev_lvol_clone", 00:08:57.394 "bdev_lvol_snapshot", 00:08:57.394 "bdev_lvol_create", 00:08:57.394 "bdev_lvol_delete_lvstore", 00:08:57.394 
"bdev_lvol_rename_lvstore", 00:08:57.394 "bdev_lvol_create_lvstore", 00:08:57.394 "bdev_raid_set_options", 00:08:57.394 "bdev_raid_remove_base_bdev", 00:08:57.394 "bdev_raid_add_base_bdev", 00:08:57.394 "bdev_raid_delete", 00:08:57.394 "bdev_raid_create", 00:08:57.394 "bdev_raid_get_bdevs", 00:08:57.394 "bdev_error_inject_error", 00:08:57.394 "bdev_error_delete", 00:08:57.394 "bdev_error_create", 00:08:57.394 "bdev_split_delete", 00:08:57.394 "bdev_split_create", 00:08:57.394 "bdev_delay_delete", 00:08:57.394 "bdev_delay_create", 00:08:57.394 "bdev_delay_update_latency", 00:08:57.394 "bdev_zone_block_delete", 00:08:57.394 "bdev_zone_block_create", 00:08:57.394 "blobfs_create", 00:08:57.394 "blobfs_detect", 00:08:57.394 "blobfs_set_cache_size", 00:08:57.394 "bdev_aio_delete", 00:08:57.394 "bdev_aio_rescan", 00:08:57.394 "bdev_aio_create", 00:08:57.394 "bdev_ftl_set_property", 00:08:57.394 "bdev_ftl_get_properties", 00:08:57.394 "bdev_ftl_get_stats", 00:08:57.394 "bdev_ftl_unmap", 00:08:57.394 "bdev_ftl_unload", 00:08:57.394 "bdev_ftl_delete", 00:08:57.394 "bdev_ftl_load", 00:08:57.394 "bdev_ftl_create", 00:08:57.394 "bdev_virtio_attach_controller", 00:08:57.394 "bdev_virtio_scsi_get_devices", 00:08:57.394 "bdev_virtio_detach_controller", 00:08:57.394 "bdev_virtio_blk_set_hotplug", 00:08:57.394 "bdev_iscsi_delete", 00:08:57.394 "bdev_iscsi_create", 00:08:57.394 "bdev_iscsi_set_options", 00:08:57.394 "accel_error_inject_error", 00:08:57.394 "ioat_scan_accel_module", 00:08:57.394 "dsa_scan_accel_module", 00:08:57.394 "iaa_scan_accel_module", 00:08:57.394 "keyring_file_remove_key", 00:08:57.394 "keyring_file_add_key", 00:08:57.394 "keyring_linux_set_options", 00:08:57.394 "iscsi_get_histogram", 00:08:57.394 "iscsi_enable_histogram", 00:08:57.394 "iscsi_set_options", 00:08:57.394 "iscsi_get_auth_groups", 00:08:57.394 "iscsi_auth_group_remove_secret", 00:08:57.394 "iscsi_auth_group_add_secret", 00:08:57.394 "iscsi_delete_auth_group", 00:08:57.394 "iscsi_create_auth_group", 00:08:57.394 "iscsi_set_discovery_auth", 00:08:57.394 "iscsi_get_options", 00:08:57.394 "iscsi_target_node_request_logout", 00:08:57.394 "iscsi_target_node_set_redirect", 00:08:57.394 "iscsi_target_node_set_auth", 00:08:57.394 "iscsi_target_node_add_lun", 00:08:57.394 "iscsi_get_stats", 00:08:57.394 "iscsi_get_connections", 00:08:57.394 "iscsi_portal_group_set_auth", 00:08:57.394 "iscsi_start_portal_group", 00:08:57.394 "iscsi_delete_portal_group", 00:08:57.394 "iscsi_create_portal_group", 00:08:57.394 "iscsi_get_portal_groups", 00:08:57.394 "iscsi_delete_target_node", 00:08:57.394 "iscsi_target_node_remove_pg_ig_maps", 00:08:57.394 "iscsi_target_node_add_pg_ig_maps", 00:08:57.394 "iscsi_create_target_node", 00:08:57.394 "iscsi_get_target_nodes", 00:08:57.394 "iscsi_delete_initiator_group", 00:08:57.394 "iscsi_initiator_group_remove_initiators", 00:08:57.394 "iscsi_initiator_group_add_initiators", 00:08:57.394 "iscsi_create_initiator_group", 00:08:57.394 "iscsi_get_initiator_groups", 00:08:57.394 "nvmf_set_crdt", 00:08:57.394 "nvmf_set_config", 00:08:57.394 "nvmf_set_max_subsystems", 00:08:57.394 "nvmf_stop_mdns_prr", 00:08:57.394 "nvmf_publish_mdns_prr", 00:08:57.394 "nvmf_subsystem_get_listeners", 00:08:57.394 "nvmf_subsystem_get_qpairs", 00:08:57.394 "nvmf_subsystem_get_controllers", 00:08:57.394 "nvmf_get_stats", 00:08:57.394 "nvmf_get_transports", 00:08:57.394 "nvmf_create_transport", 00:08:57.394 "nvmf_get_targets", 00:08:57.394 "nvmf_delete_target", 00:08:57.394 "nvmf_create_target", 00:08:57.394 
"nvmf_subsystem_allow_any_host", 00:08:57.394 "nvmf_subsystem_remove_host", 00:08:57.394 "nvmf_subsystem_add_host", 00:08:57.394 "nvmf_ns_remove_host", 00:08:57.394 "nvmf_ns_add_host", 00:08:57.394 "nvmf_subsystem_remove_ns", 00:08:57.394 "nvmf_subsystem_add_ns", 00:08:57.394 "nvmf_subsystem_listener_set_ana_state", 00:08:57.394 "nvmf_discovery_get_referrals", 00:08:57.394 "nvmf_discovery_remove_referral", 00:08:57.394 "nvmf_discovery_add_referral", 00:08:57.394 "nvmf_subsystem_remove_listener", 00:08:57.394 "nvmf_subsystem_add_listener", 00:08:57.394 "nvmf_delete_subsystem", 00:08:57.394 "nvmf_create_subsystem", 00:08:57.394 "nvmf_get_subsystems", 00:08:57.394 "env_dpdk_get_mem_stats", 00:08:57.394 "nbd_get_disks", 00:08:57.394 "nbd_stop_disk", 00:08:57.394 "nbd_start_disk", 00:08:57.394 "ublk_recover_disk", 00:08:57.394 "ublk_get_disks", 00:08:57.394 "ublk_stop_disk", 00:08:57.394 "ublk_start_disk", 00:08:57.394 "ublk_destroy_target", 00:08:57.394 "ublk_create_target", 00:08:57.394 "virtio_blk_create_transport", 00:08:57.394 "virtio_blk_get_transports", 00:08:57.394 "vhost_controller_set_coalescing", 00:08:57.394 "vhost_get_controllers", 00:08:57.394 "vhost_delete_controller", 00:08:57.394 "vhost_create_blk_controller", 00:08:57.394 "vhost_scsi_controller_remove_target", 00:08:57.394 "vhost_scsi_controller_add_target", 00:08:57.394 "vhost_start_scsi_controller", 00:08:57.394 "vhost_create_scsi_controller", 00:08:57.394 "thread_set_cpumask", 00:08:57.394 "framework_get_governor", 00:08:57.394 "framework_get_scheduler", 00:08:57.394 "framework_set_scheduler", 00:08:57.394 "framework_get_reactors", 00:08:57.394 "thread_get_io_channels", 00:08:57.394 "thread_get_pollers", 00:08:57.394 "thread_get_stats", 00:08:57.394 "framework_monitor_context_switch", 00:08:57.394 "spdk_kill_instance", 00:08:57.394 "log_enable_timestamps", 00:08:57.394 "log_get_flags", 00:08:57.394 "log_clear_flag", 00:08:57.394 "log_set_flag", 00:08:57.394 "log_get_level", 00:08:57.394 "log_set_level", 00:08:57.394 "log_get_print_level", 00:08:57.394 "log_set_print_level", 00:08:57.394 "framework_enable_cpumask_locks", 00:08:57.394 "framework_disable_cpumask_locks", 00:08:57.394 "framework_wait_init", 00:08:57.394 "framework_start_init", 00:08:57.394 "scsi_get_devices", 00:08:57.394 "bdev_get_histogram", 00:08:57.394 "bdev_enable_histogram", 00:08:57.394 "bdev_set_qos_limit", 00:08:57.394 "bdev_set_qd_sampling_period", 00:08:57.394 "bdev_get_bdevs", 00:08:57.394 "bdev_reset_iostat", 00:08:57.394 "bdev_get_iostat", 00:08:57.394 "bdev_examine", 00:08:57.394 "bdev_wait_for_examine", 00:08:57.394 "bdev_set_options", 00:08:57.394 "notify_get_notifications", 00:08:57.394 "notify_get_types", 00:08:57.394 "accel_get_stats", 00:08:57.394 "accel_set_options", 00:08:57.395 "accel_set_driver", 00:08:57.395 "accel_crypto_key_destroy", 00:08:57.395 "accel_crypto_keys_get", 00:08:57.395 "accel_crypto_key_create", 00:08:57.395 "accel_assign_opc", 00:08:57.395 "accel_get_module_info", 00:08:57.395 "accel_get_opc_assignments", 00:08:57.395 "vmd_rescan", 00:08:57.395 "vmd_remove_device", 00:08:57.395 "vmd_enable", 00:08:57.395 "sock_get_default_impl", 00:08:57.395 "sock_set_default_impl", 00:08:57.395 "sock_impl_set_options", 00:08:57.395 "sock_impl_get_options", 00:08:57.395 "iobuf_get_stats", 00:08:57.395 "iobuf_set_options", 00:08:57.395 "framework_get_pci_devices", 00:08:57.395 "framework_get_config", 00:08:57.395 "framework_get_subsystems", 00:08:57.395 "trace_get_info", 00:08:57.395 "trace_get_tpoint_group_mask", 00:08:57.395 
"trace_disable_tpoint_group", 00:08:57.395 "trace_enable_tpoint_group", 00:08:57.395 "trace_clear_tpoint_mask", 00:08:57.395 "trace_set_tpoint_mask", 00:08:57.395 "keyring_get_keys", 00:08:57.395 "spdk_get_version", 00:08:57.395 "rpc_get_methods" 00:08:57.395 ] 00:08:57.395 08:22:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.395 08:22:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:57.395 08:22:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2192319 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2192319 ']' 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2192319 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2192319 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2192319' 00:08:57.395 killing process with pid 2192319 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2192319 00:08:57.395 08:22:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2192319 00:09:01.586 00:09:01.586 real 0m5.947s 00:09:01.586 user 0m10.014s 00:09:01.586 sys 0m1.017s 00:09:01.586 08:22:13 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.586 08:22:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.586 ************************************ 00:09:01.586 END TEST spdkcli_tcp 00:09:01.586 ************************************ 00:09:01.586 08:22:13 -- common/autotest_common.sh@1142 -- # return 0 00:09:01.586 08:22:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:01.586 08:22:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.586 08:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.586 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:09:01.586 ************************************ 00:09:01.586 START TEST dpdk_mem_utility 00:09:01.586 ************************************ 00:09:01.586 08:22:13 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:01.586 * Looking for test storage... 
00:09:01.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:01.586 08:22:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:01.586 08:22:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2193051 00:09:01.586 08:22:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:01.586 08:22:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2193051 00:09:01.586 08:22:13 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2193051 ']' 00:09:01.586 08:22:13 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.586 08:22:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.586 08:22:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.586 08:22:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.586 08:22:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:01.586 [2024-07-23 08:22:13.694492] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:01.586 [2024-07-23 08:22:13.694820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193051 ] 00:09:01.586 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.586 [2024-07-23 08:22:13.981503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.156 [2024-07-23 08:22:14.477036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.453 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.453 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:09:05.453 08:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:05.453 08:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:05.453 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.453 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:05.453 { 00:09:05.453 "filename": "/tmp/spdk_mem_dump.txt" 00:09:05.453 } 00:09:05.453 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.453 08:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:05.453 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:05.453 1 heaps totaling size 820.000000 MiB 00:09:05.453 size: 820.000000 MiB heap id: 0 00:09:05.453 end heaps---------- 00:09:05.453 8 mempools totaling size 598.116089 MiB 00:09:05.453 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:05.453 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:05.453 size: 84.521057 MiB name: bdev_io_2193051 00:09:05.453 size: 51.011292 MiB name: evtpool_2193051 00:09:05.453 
size: 50.003479 MiB name: msgpool_2193051 00:09:05.453 size: 21.763794 MiB name: PDU_Pool 00:09:05.453 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:05.453 size: 0.026123 MiB name: Session_Pool 00:09:05.453 end mempools------- 00:09:05.453 6 memzones totaling size 4.142822 MiB 00:09:05.453 size: 1.000366 MiB name: RG_ring_0_2193051 00:09:05.453 size: 1.000366 MiB name: RG_ring_1_2193051 00:09:05.453 size: 1.000366 MiB name: RG_ring_4_2193051 00:09:05.453 size: 1.000366 MiB name: RG_ring_5_2193051 00:09:05.453 size: 0.125366 MiB name: RG_ring_2_2193051 00:09:05.453 size: 0.015991 MiB name: RG_ring_3_2193051 00:09:05.453 end memzones------- 00:09:05.453 08:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:05.453 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:09:05.453 list of free elements. size: 18.514832 MiB 00:09:05.453 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:05.453 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:05.453 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:05.453 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:05.453 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:05.453 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:05.453 element at address: 0x200019600000 with size: 0.999329 MiB 00:09:05.453 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:05.453 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:05.453 element at address: 0x200018e00000 with size: 0.959900 MiB 00:09:05.453 element at address: 0x200019900040 with size: 0.937256 MiB 00:09:05.453 element at address: 0x200000200000 with size: 0.840942 MiB 00:09:05.453 element at address: 0x20001b000000 with size: 0.583191 MiB 00:09:05.453 element at address: 0x200019200000 with size: 0.491150 MiB 00:09:05.453 element at address: 0x200019a00000 with size: 0.485657 MiB 00:09:05.453 element at address: 0x200013800000 with size: 0.470581 MiB 00:09:05.453 element at address: 0x200028400000 with size: 0.411072 MiB 00:09:05.453 element at address: 0x200003a00000 with size: 0.356140 MiB 00:09:05.453 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:09:05.453 list of standard malloc elements. 
size: 199.220764 MiB 00:09:05.453 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:05.453 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:05.453 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:05.453 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:05.453 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:05.453 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:05.453 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:05.453 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:05.453 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:09:05.453 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:09:05.453 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:05.453 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:05.453 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:05.453 element at address: 0x200003eff000 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:05.453 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:05.453 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:05.453 list of memzone associated elements. 
size: 602.264404 MiB 00:09:05.453 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:09:05.453 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:05.453 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:09:05.453 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:05.453 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:09:05.453 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2193051_0 00:09:05.453 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:05.453 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2193051_0 00:09:05.453 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:05.453 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2193051_0 00:09:05.453 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:09:05.453 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:05.453 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:09:05.453 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:05.453 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:05.453 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2193051 00:09:05.453 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:05.453 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2193051 00:09:05.453 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:05.453 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2193051 00:09:05.453 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:05.453 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:05.454 element at address: 0x200019abc780 with size: 1.008179 MiB 00:09:05.454 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:05.454 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:05.454 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:05.454 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:09:05.454 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:05.454 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:05.454 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2193051 00:09:05.454 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:05.454 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2193051 00:09:05.454 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:09:05.454 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2193051 00:09:05.454 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:09:05.454 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2193051 00:09:05.454 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:09:05.454 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2193051 00:09:05.454 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:09:05.454 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:05.454 element at address: 0x200013878780 with size: 0.500549 MiB 00:09:05.454 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:05.454 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:09:05.454 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:05.454 element at address: 0x200003adf740 with size: 0.125549 MiB 00:09:05.454 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2193051 00:09:05.454 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:09:05.454 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:05.454 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:09:05.454 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:05.454 element at address: 0x200003adb500 with size: 0.016174 MiB 00:09:05.454 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2193051 00:09:05.454 element at address: 0x20002846f540 with size: 0.002502 MiB 00:09:05.454 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:05.454 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:09:05.454 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2193051 00:09:05.454 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:09:05.454 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2193051 00:09:05.454 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:09:05.454 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:05.454 08:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:05.454 08:22:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2193051 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2193051 ']' 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2193051 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2193051 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2193051' 00:09:05.454 killing process with pid 2193051 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2193051 00:09:05.454 08:22:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2193051 00:09:10.778 00:09:10.778 real 0m8.967s 00:09:10.778 user 0m9.679s 00:09:10.778 sys 0m1.232s 00:09:10.778 08:22:22 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.778 08:22:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:10.778 ************************************ 00:09:10.778 END TEST dpdk_mem_utility 00:09:10.778 ************************************ 00:09:10.778 08:22:22 -- common/autotest_common.sh@1142 -- # return 0 00:09:10.778 08:22:22 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:10.778 08:22:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.778 08:22:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.778 08:22:22 -- common/autotest_common.sh@10 -- # set +x 00:09:10.778 ************************************ 00:09:10.778 START TEST event 00:09:10.778 ************************************ 00:09:10.779 08:22:22 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:10.779 * Looking for test storage... 
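The dpdk_mem_utility output above comes from two tools working together: rpc_cmd env_dpdk_get_mem_stats asks the running target to write its DPDK memory statistics (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element listing for heap 0. A sketch of the same flow, assuming dpdk_mem_info.py reads the dump file produced by the RPC (that default is an assumption here):

  # Illustrative replay of the dpdk_mem_utility flow traced above.
  scripts/rpc.py env_dpdk_get_mem_stats   # target writes stats; reply: {"filename": "/tmp/spdk_mem_dump.txt"}
  scripts/dpdk_mem_info.py                # summary view: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0           # detailed element/memzone listing for heap id 0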
00:09:10.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:10.779 08:22:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:10.779 08:22:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:10.779 08:22:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:10.779 08:22:22 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:10.779 08:22:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.779 08:22:22 event -- common/autotest_common.sh@10 -- # set +x 00:09:10.779 ************************************ 00:09:10.779 START TEST event_perf 00:09:10.779 ************************************ 00:09:10.779 08:22:22 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:10.779 Running I/O for 1 seconds...[2024-07-23 08:22:22.576989] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:10.779 [2024-07-23 08:22:22.577285] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194169 ] 00:09:10.779 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.779 [2024-07-23 08:22:22.880546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.042 [2024-07-23 08:22:23.387517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.042 [2024-07-23 08:22:23.387580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.042 [2024-07-23 08:22:23.387632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.042 [2024-07-23 08:22:23.387646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.948 Running I/O for 1 seconds... 00:09:12.948 lcore 0: 153826 00:09:12.948 lcore 1: 153828 00:09:12.948 lcore 2: 153826 00:09:12.948 lcore 3: 153827 00:09:12.948 done. 00:09:12.948 00:09:12.948 real 0m2.737s 00:09:12.948 user 0m5.357s 00:09:12.948 sys 0m0.345s 00:09:12.948 08:22:25 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.948 08:22:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:12.948 ************************************ 00:09:12.948 END TEST event_perf 00:09:12.948 ************************************ 00:09:12.948 08:22:25 event -- common/autotest_common.sh@1142 -- # return 0 00:09:12.948 08:22:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:12.948 08:22:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:12.948 08:22:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.948 08:22:25 event -- common/autotest_common.sh@10 -- # set +x 00:09:12.948 ************************************ 00:09:12.948 START TEST event_reactor 00:09:12.948 ************************************ 00:09:12.948 08:22:25 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:12.948 [2024-07-23 08:22:25.395581] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
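For reference, the per-lcore figures above come from test/event/event_perf/event_perf run as shown in the trace: -m 0xF spreads the run over four reactors and -t 1 limits it to one second, so each 'lcore N: count' line is the number of events that reactor processed in that window (roughly 154k apiece here).

  # Invocation used by the harness above; flags per the trace (-m core mask, -t seconds).
  test/event/event_perf/event_perf -m 0xF -t 1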
00:09:12.948 [2024-07-23 08:22:25.395855] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194479 ] 00:09:13.208 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.208 [2024-07-23 08:22:25.695730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.778 [2024-07-23 08:22:26.169931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.691 test_start 00:09:15.691 oneshot 00:09:15.691 tick 100 00:09:15.691 tick 100 00:09:15.691 tick 250 00:09:15.691 tick 100 00:09:15.691 tick 100 00:09:15.691 tick 250 00:09:15.691 tick 100 00:09:15.691 tick 500 00:09:15.691 tick 100 00:09:15.691 tick 100 00:09:15.691 tick 250 00:09:15.691 tick 100 00:09:15.691 tick 100 00:09:15.691 test_end 00:09:15.691 00:09:15.691 real 0m2.723s 00:09:15.691 user 0m2.362s 00:09:15.691 sys 0m0.335s 00:09:15.691 08:22:28 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.691 08:22:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:15.691 ************************************ 00:09:15.691 END TEST event_reactor 00:09:15.691 ************************************ 00:09:15.691 08:22:28 event -- common/autotest_common.sh@1142 -- # return 0 00:09:15.691 08:22:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:15.691 08:22:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:15.691 08:22:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.691 08:22:28 event -- common/autotest_common.sh@10 -- # set +x 00:09:15.691 ************************************ 00:09:15.691 START TEST event_reactor_perf 00:09:15.691 ************************************ 00:09:15.691 08:22:28 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:15.691 [2024-07-23 08:22:28.197106] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:15.691 [2024-07-23 08:22:28.197339] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194761 ] 00:09:15.951 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.211 [2024-07-23 08:22:28.489376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.470 [2024-07-23 08:22:28.991647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.376 test_start 00:09:18.376 test_end 00:09:18.376 Performance: 138571 events per second 00:09:18.376 00:09:18.376 real 0m2.706s 00:09:18.376 user 0m2.374s 00:09:18.376 sys 0m0.306s 00:09:18.376 08:22:30 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.376 08:22:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:18.376 ************************************ 00:09:18.376 END TEST event_reactor_perf 00:09:18.376 ************************************ 00:09:18.376 08:22:30 event -- common/autotest_common.sh@1142 -- # return 0 00:09:18.376 08:22:30 event -- event/event.sh@49 -- # uname -s 00:09:18.376 08:22:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:18.376 08:22:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:18.376 08:22:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.376 08:22:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.376 08:22:30 event -- common/autotest_common.sh@10 -- # set +x 00:09:18.634 ************************************ 00:09:18.634 START TEST event_scheduler 00:09:18.634 ************************************ 00:09:18.635 08:22:30 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:18.635 * Looking for test storage... 00:09:18.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:18.635 08:22:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:18.635 08:22:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2195202 00:09:18.635 08:22:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:18.635 08:22:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:18.635 08:22:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2195202 00:09:18.635 08:22:31 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2195202 ']' 00:09:18.635 08:22:31 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.635 08:22:31 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.635 08:22:31 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:18.635 08:22:31 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.635 08:22:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:18.635 [2024-07-23 08:22:31.148150] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:18.635 [2024-07-23 08:22:31.148418] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195202 ] 00:09:18.894 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.894 [2024-07-23 08:22:31.392220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.464 [2024-07-23 08:22:31.904130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.464 [2024-07-23 08:22:31.904213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.464 [2024-07-23 08:22:31.904271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.464 [2024-07-23 08:22:31.904278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.402 08:22:32 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.402 08:22:32 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:09:20.402 08:22:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:20.402 08:22:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.402 08:22:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:20.402 [2024-07-23 08:22:32.635738] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:20.402 [2024-07-23 08:22:32.635809] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:09:20.402 [2024-07-23 08:22:32.635857] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:20.402 [2024-07-23 08:22:32.635888] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:20.402 [2024-07-23 08:22:32.635912] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:20.402 08:22:32 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.402 08:22:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:20.402 08:22:32 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.402 08:22:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 [2024-07-23 08:22:33.376254] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
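The scheduler test target above is launched with --wait-for-rpc, so the framework pauses before subsystem init; the trace then switches the scheduler over RPC before resuming. framework_set_scheduler dynamic triggers the governor warning and the dynamic scheduler's thresholds seen above (load limit 20, core limit 80, core busy 95), and framework_start_init lets initialization continue. Both method names appear in the rpc_get_methods listing earlier in this log; the sketch below simply replays that ordering with rpc.py in place of the script's rpc_cmd wrapper:

  # Target started with --wait-for-rpc: choose the scheduler, then finish init (illustrative).
  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init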
00:09:20.984 08:22:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:20.984 08:22:33 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.984 08:22:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 ************************************ 00:09:20.984 START TEST scheduler_create_thread 00:09:20.984 ************************************ 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 2 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 3 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 4 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 5 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 6 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 7 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 8 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.984 9 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.984 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.245 10 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.245 08:22:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:21.504 08:22:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.504 08:22:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:21.504 08:22:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.504 08:22:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:23.409 08:22:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.409 08:22:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:23.409 08:22:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:23.409 08:22:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.409 08:22:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:24.344 08:22:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.344 00:09:24.344 real 0m3.113s 00:09:24.344 user 0m0.018s 00:09:24.344 sys 0m0.004s 00:09:24.344 08:22:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.344 08:22:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:24.344 ************************************ 00:09:24.344 END TEST scheduler_create_thread 00:09:24.344 ************************************ 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:09:24.344 08:22:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:24.344 08:22:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2195202 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2195202 ']' 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2195202 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2195202 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2195202' 00:09:24.344 killing process with pid 2195202 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2195202 00:09:24.344 08:22:36 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2195202 00:09:24.603 [2024-07-23 08:22:37.007324] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
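The scheduler_create_thread subtest above drives a test-only RPC plugin (rpc_cmd --plugin scheduler_plugin ...): it creates pinned busy threads on each core mask, pinned idle threads, partially active unpinned threads, bumps one thread's activity with scheduler_thread_set_active, and removes another with scheduler_thread_delete. The excerpt below repeats a few of the exact calls from the trace; the flag meanings (-n thread name, -m cpumask, -a activity percentage) are inferred from the names used, not taken from the plugin source:

  # Representative plugin RPCs from the trace above (flag interpretation assumed).
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12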
00:09:27.168 00:09:27.168 real 0m8.131s 00:09:27.168 user 0m16.845s 00:09:27.168 sys 0m0.887s 00:09:27.168 08:22:39 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.168 08:22:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 ************************************ 00:09:27.168 END TEST event_scheduler 00:09:27.168 ************************************ 00:09:27.168 08:22:39 event -- common/autotest_common.sh@1142 -- # return 0 00:09:27.168 08:22:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:27.168 08:22:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:27.168 08:22:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:27.168 08:22:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.168 08:22:39 event -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 ************************************ 00:09:27.168 START TEST app_repeat 00:09:27.168 ************************************ 00:09:27.168 08:22:39 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2196075 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:27.168 08:22:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2196075' 00:09:27.168 Process app_repeat pid: 2196075 00:09:27.169 08:22:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:27.169 08:22:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:27.169 spdk_app_start Round 0 00:09:27.169 08:22:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2196075 /var/tmp/spdk-nbd.sock 00:09:27.169 08:22:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2196075 ']' 00:09:27.169 08:22:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:27.169 08:22:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.169 08:22:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:27.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:27.169 08:22:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.169 08:22:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 [2024-07-23 08:22:39.230891] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:27.169 [2024-07-23 08:22:39.231229] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196075 ] 00:09:27.169 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.169 [2024-07-23 08:22:39.532467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.737 [2024-07-23 08:22:40.024938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.737 [2024-07-23 08:22:40.024971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.672 08:22:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.672 08:22:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:28.672 08:22:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:29.239 Malloc0 00:09:29.239 08:22:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:29.497 Malloc1 00:09:29.497 08:22:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:29.498 08:22:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:30.064 /dev/nbd0 00:09:30.064 08:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:30.064 08:22:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:30.064 08:22:42 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:30.064 08:22:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:30.064 1+0 records in 00:09:30.064 1+0 records out 00:09:30.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227149 s, 18.0 MB/s 00:09:30.322 08:22:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:30.322 08:22:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:30.322 08:22:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:30.322 08:22:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:30.322 08:22:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:30.322 08:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.322 08:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.323 08:22:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:30.890 /dev/nbd1 00:09:30.890 08:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:30.890 08:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:30.890 1+0 records in 00:09:30.890 1+0 records out 00:09:30.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627924 s, 6.5 MB/s 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:30.890 08:22:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:30.890 08:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.890 08:22:43 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.890 08:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:30.890 08:22:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.890 08:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:31.149 { 00:09:31.149 "nbd_device": "/dev/nbd0", 00:09:31.149 "bdev_name": "Malloc0" 00:09:31.149 }, 00:09:31.149 { 00:09:31.149 "nbd_device": "/dev/nbd1", 00:09:31.149 "bdev_name": "Malloc1" 00:09:31.149 } 00:09:31.149 ]' 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:31.149 { 00:09:31.149 "nbd_device": "/dev/nbd0", 00:09:31.149 "bdev_name": "Malloc0" 00:09:31.149 }, 00:09:31.149 { 00:09:31.149 "nbd_device": "/dev/nbd1", 00:09:31.149 "bdev_name": "Malloc1" 00:09:31.149 } 00:09:31.149 ]' 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:31.149 /dev/nbd1' 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:31.149 /dev/nbd1' 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:31.149 256+0 records in 00:09:31.149 256+0 records out 00:09:31.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515752 s, 203 MB/s 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.149 08:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:31.408 256+0 records in 00:09:31.408 256+0 records out 00:09:31.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.038074 s, 27.5 MB/s 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:31.408 256+0 records in 00:09:31.408 256+0 records out 00:09:31.408 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0416478 s, 25.2 MB/s 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.408 08:22:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.974 08:22:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.232 08:22:44 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.232 08:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.798 08:22:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:32.798 08:22:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.798 08:22:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:33.057 08:22:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:33.057 08:22:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:33.992 08:22:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:35.898 [2024-07-23 08:22:48.322640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.467 [2024-07-23 08:22:48.798879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.467 [2024-07-23 08:22:48.798883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.726 [2024-07-23 08:22:49.062062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:36.726 [2024-07-23 08:22:49.062148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:36.984 08:22:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:36.984 08:22:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:36.984 spdk_app_start Round 1 00:09:36.984 08:22:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2196075 /var/tmp/spdk-nbd.sock 00:09:36.984 08:22:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2196075 ']' 00:09:36.984 08:22:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:36.984 08:22:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.984 08:22:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:36.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
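Each app_repeat round traced above exercises the same nbd data path: create two Malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write a 1 MiB random pattern to each, read it back with cmp, then stop the devices. A minimal sketch of that write/verify pass, using the same RPC socket and file names shown in the trace:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096          # -> Malloc0
  $rpc bdev_malloc_create 64 4096          # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  tmp=test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write pass
      cmp -b -n 1M $tmp $nbd                              # verify pass
  done
  rm $tmp
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1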
00:09:36.984 08:22:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.984 08:22:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:37.554 08:22:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.554 08:22:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:37.554 08:22:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:38.123 Malloc0 00:09:38.123 08:22:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:39.058 Malloc1 00:09:39.058 08:22:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:39.058 /dev/nbd0 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:39.058 08:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.058 08:22:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:39.316 1+0 records in 00:09:39.316 1+0 records out 00:09:39.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193468 s, 21.2 MB/s 00:09:39.317 08:22:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.317 08:22:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:39.317 08:22:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.317 08:22:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.317 08:22:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:39.317 08:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.317 08:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.317 08:22:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:39.575 /dev/nbd1 00:09:39.575 08:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:39.575 08:22:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:39.575 1+0 records in 00:09:39.575 1+0 records out 00:09:39.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477003 s, 8.6 MB/s 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:39.575 08:22:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:39.575 08:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:39.575 08:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:39.575 08:22:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.575 08:22:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.575 08:22:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:40.142 { 00:09:40.142 "nbd_device": "/dev/nbd0", 00:09:40.142 "bdev_name": "Malloc0" 00:09:40.142 }, 00:09:40.142 { 00:09:40.142 "nbd_device": "/dev/nbd1", 00:09:40.142 "bdev_name": "Malloc1" 00:09:40.142 } 00:09:40.142 ]' 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:40.142 { 00:09:40.142 "nbd_device": "/dev/nbd0", 00:09:40.142 "bdev_name": "Malloc0" 00:09:40.142 }, 00:09:40.142 { 00:09:40.142 "nbd_device": "/dev/nbd1", 00:09:40.142 "bdev_name": "Malloc1" 00:09:40.142 } 00:09:40.142 ]' 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:40.142 /dev/nbd1' 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:40.142 /dev/nbd1' 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:40.142 256+0 records in 00:09:40.142 256+0 records out 00:09:40.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00865435 s, 121 MB/s 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.142 08:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:40.400 256+0 records in 00:09:40.400 256+0 records out 00:09:40.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0380279 s, 27.6 MB/s 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:40.400 256+0 records in 00:09:40.400 256+0 records out 00:09:40.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0404579 s, 25.9 MB/s 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.400 08:22:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.659 08:22:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.226 08:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:41.792 08:22:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:41.792 08:22:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:42.774 08:22:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:44.688 [2024-07-23 08:22:57.018305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.254 [2024-07-23 08:22:57.490622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.254 [2024-07-23 08:22:57.490624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.254 [2024-07-23 08:22:57.754454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:45.254 [2024-07-23 08:22:57.754544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:45.511 08:22:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:45.511 08:22:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:45.511 spdk_app_start Round 2 00:09:45.511 08:22:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2196075 /var/tmp/spdk-nbd.sock 00:09:45.511 08:22:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2196075 ']' 00:09:45.511 08:22:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:45.511 08:22:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.511 08:22:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:45.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
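Before and after each teardown the test counts the exported nbd devices by parsing the nbd_get_disks JSON echoed in the trace. A short sketch of that count check, under the same socket assumption as above:

  json=$($rpc nbd_get_disks)     # e.g. [{ "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ...]
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # 2 while exported, 0 after nbd_stop_disk
  [ "$count" -eq 2 ] || echo "unexpected nbd count: $count"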
00:09:45.511 08:22:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.511 08:22:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:46.074 08:22:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.074 08:22:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:46.074 08:22:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:47.007 Malloc0 00:09:47.007 08:22:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:47.573 Malloc1 00:09:47.573 08:22:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.573 08:22:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:47.831 /dev/nbd0 00:09:47.831 08:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:47.831 08:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:47.831 1+0 records in 00:09:47.831 1+0 records out 00:09:47.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323958 s, 12.6 MB/s 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:47.831 08:23:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:47.831 08:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.831 08:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.831 08:23:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:48.396 /dev/nbd1 00:09:48.396 08:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:48.396 08:23:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:48.396 1+0 records in 00:09:48.396 1+0 records out 00:09:48.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333795 s, 12.3 MB/s 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:48.396 08:23:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:48.396 08:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.396 08:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.396 08:23:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.396 08:23:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.396 08:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.962 08:23:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:48.962 { 00:09:48.962 "nbd_device": "/dev/nbd0", 00:09:48.962 "bdev_name": "Malloc0" 00:09:48.962 }, 00:09:48.962 { 00:09:48.962 "nbd_device": "/dev/nbd1", 00:09:48.962 "bdev_name": "Malloc1" 00:09:48.962 } 00:09:48.962 ]' 00:09:48.962 08:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:48.962 { 00:09:48.962 "nbd_device": "/dev/nbd0", 00:09:48.962 "bdev_name": "Malloc0" 00:09:48.962 }, 00:09:48.962 { 00:09:48.962 "nbd_device": "/dev/nbd1", 00:09:48.962 "bdev_name": "Malloc1" 00:09:48.962 } 00:09:48.962 ]' 00:09:48.962 08:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:49.220 /dev/nbd1' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:49.220 /dev/nbd1' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:49.220 256+0 records in 00:09:49.220 256+0 records out 00:09:49.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010031 s, 105 MB/s 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:49.220 256+0 records in 00:09:49.220 256+0 records out 00:09:49.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0387499 s, 27.1 MB/s 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:49.220 256+0 records in 00:09:49.220 256+0 records out 00:09:49.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0422435 s, 24.8 MB/s 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:49.220 08:23:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.221 08:23:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.786 08:23:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.352 08:23:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:51.293 08:23:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:51.293 08:23:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:52.227 08:23:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:54.131 [2024-07-23 08:23:06.503729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:54.701 [2024-07-23 08:23:06.975234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.701 [2024-07-23 08:23:06.975237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.959 [2024-07-23 08:23:07.240320] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:54.959 [2024-07-23 08:23:07.240400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:54.959 08:23:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2196075 /var/tmp/spdk-nbd.sock 00:09:54.959 08:23:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2196075 ']' 00:09:54.959 08:23:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:54.959 08:23:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.959 08:23:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:54.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
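The round loop above leans on two autotest_common.sh helpers that show up throughout the trace: waitforlisten, which polls until the app owns its UNIX-domain RPC socket, and killprocess, which checks the pid before sending SIGTERM and waiting for it to exit. A rough sketch of the killprocess side, matching the checks traced here (the real helper also handles sudo-owned pids and non-Linux hosts, omitted for brevity):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                  # process must still be alive
      local name
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 in this run
      [ "$name" = sudo ] && return 1              # sketch only: skip the sudo path
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }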
00:09:54.959 08:23:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.959 08:23:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:55.526 08:23:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.526 08:23:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:55.526 08:23:07 event.app_repeat -- event/event.sh@39 -- # killprocess 2196075 00:09:55.526 08:23:07 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2196075 ']' 00:09:55.526 08:23:07 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2196075 00:09:55.526 08:23:07 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:09:55.526 08:23:07 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.526 08:23:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2196075 00:09:55.785 08:23:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:55.785 08:23:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:55.785 08:23:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2196075' 00:09:55.785 killing process with pid 2196075 00:09:55.785 08:23:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2196075 00:09:55.785 08:23:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2196075 00:09:57.691 spdk_app_start is called in Round 0. 00:09:57.691 Shutdown signal received, stop current app iteration 00:09:57.691 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:09:57.691 spdk_app_start is called in Round 1. 00:09:57.691 Shutdown signal received, stop current app iteration 00:09:57.691 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:09:57.691 spdk_app_start is called in Round 2. 00:09:57.691 Shutdown signal received, stop current app iteration 00:09:57.691 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:09:57.691 spdk_app_start is called in Round 3. 
00:09:57.691 Shutdown signal received, stop current app iteration 00:09:57.691 08:23:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:57.691 08:23:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:57.691 00:09:57.691 real 0m30.776s 00:09:57.692 user 1m5.831s 00:09:57.692 sys 0m6.283s 00:09:57.692 08:23:09 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.692 08:23:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:57.692 ************************************ 00:09:57.692 END TEST app_repeat 00:09:57.692 ************************************ 00:09:57.692 08:23:09 event -- common/autotest_common.sh@1142 -- # return 0 00:09:57.692 08:23:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:57.692 08:23:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:57.692 08:23:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.692 08:23:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.692 08:23:09 event -- common/autotest_common.sh@10 -- # set +x 00:09:57.692 ************************************ 00:09:57.692 START TEST cpu_locks 00:09:57.692 ************************************ 00:09:57.692 08:23:09 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:57.692 * Looking for test storage... 00:09:57.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:57.692 08:23:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:57.692 08:23:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:57.692 08:23:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:57.692 08:23:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:57.692 08:23:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.692 08:23:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.692 08:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.692 ************************************ 00:09:57.692 START TEST default_locks 00:09:57.692 ************************************ 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2199731 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2199731 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2199731 ']' 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
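The default_locks run that follows starts a single spdk_tgt on core 0 (-m 0x1) and then verifies that the target really holds its CPU-core file lock. The check is the lslocks pattern visible in the next entries (cpu_locks.sh line 22); a minimal sketch of that helper, with the pid supplied by the caller:

  # Succeeds if the given spdk_tgt pid holds at least one spdk_cpu_lock file lock.
  # The core locks live under /var/tmp/spdk_cpu_lock_NNN, one file per claimed core.
  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }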
00:09:57.692 08:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.692 08:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.952 [2024-07-23 08:23:10.330225] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:57.952 [2024-07-23 08:23:10.330576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199731 ] 00:09:58.212 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.212 [2024-07-23 08:23:10.630640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.792 [2024-07-23 08:23:11.124079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.740 08:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.740 08:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:10:00.740 08:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2199731 00:10:00.740 08:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2199731 00:10:00.740 08:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:01.310 lslocks: write error 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2199731 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2199731 ']' 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2199731 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2199731 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2199731' 00:10:01.310 killing process with pid 2199731 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2199731 00:10:01.310 08:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2199731 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2199731 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2199731 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2199731 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2199731 ']' 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2199731) - No such process 00:10:06.592 ERROR: process (pid: 2199731) is no longer running 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:06.592 00:10:06.592 real 0m8.207s 00:10:06.592 user 0m8.375s 00:10:06.592 sys 0m1.445s 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.592 08:23:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.592 ************************************ 00:10:06.592 END TEST default_locks 00:10:06.592 ************************************ 00:10:06.592 08:23:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:06.592 08:23:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:06.592 08:23:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:06.592 08:23:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.592 08:23:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.592 ************************************ 00:10:06.592 START TEST default_locks_via_rpc 00:10:06.592 ************************************ 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2200683 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:06.592 08:23:18 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2200683 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2200683 ']' 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.592 08:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.592 [2024-07-23 08:23:18.644597] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:06.592 [2024-07-23 08:23:18.644960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200683 ] 00:10:06.592 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.592 [2024-07-23 08:23:18.945432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.162 [2024-07-23 08:23:19.438865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2200683 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2200683 00:10:09.071 08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:09.330 
08:23:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2200683 00:10:09.331 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2200683 ']' 00:10:09.331 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2200683 00:10:09.331 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:10:09.331 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.331 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2200683 00:10:09.591 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.591 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.591 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2200683' 00:10:09.591 killing process with pid 2200683 00:10:09.591 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2200683 00:10:09.591 08:23:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2200683 00:10:14.875 00:10:14.875 real 0m8.138s 00:10:14.875 user 0m8.336s 00:10:14.875 sys 0m1.413s 00:10:14.875 08:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.875 08:23:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.875 ************************************ 00:10:14.875 END TEST default_locks_via_rpc 00:10:14.875 ************************************ 00:10:14.875 08:23:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:14.875 08:23:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:14.875 08:23:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:14.875 08:23:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.875 08:23:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.875 ************************************ 00:10:14.875 START TEST non_locking_app_on_locked_coremask 00:10:14.875 ************************************ 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2201638 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2201638 /var/tmp/spdk.sock 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2201638 ']' 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.875 08:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.875 [2024-07-23 08:23:26.837969] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:14.875 [2024-07-23 08:23:26.838330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201638 ] 00:10:14.875 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.875 [2024-07-23 08:23:27.151468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.136 [2024-07-23 08:23:27.611955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2201906 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2201906 /var/tmp/spdk2.sock 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2201906 ']' 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:17.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.045 08:23:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:17.045 [2024-07-23 08:23:29.376725] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:17.045 [2024-07-23 08:23:29.376953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201906 ] 00:10:17.045 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.304 [2024-07-23 08:23:29.698964] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
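In the non_locking_app_on_locked_coremask run traced here, a second spdk_tgt is launched on the same core mask but with --disable-cpumask-locks and its own RPC socket, so both targets can share core 0; the "CPU core locks deactivated" notice confirms the flag took effect. A hedged sketch of that pairing, assuming it is run from the SPDK repo root rather than the full Jenkins workspace path shown in the log:

  # First target claims the core 0 lock as usual.
  ./build/bin/spdk_tgt -m 0x1 &
  first_pid=$!
  # Second target shares core 0 but skips lock acquisition and answers on a separate socket.
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  second_pid=$!
  # Only the first pid should show an spdk_cpu_lock entry in 'lslocks -p'.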
00:10:17.304 [2024-07-23 08:23:29.699080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.242 [2024-07-23 08:23:30.639639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.537 08:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.537 08:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:21.537 08:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2201638 00:10:21.537 08:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2201638 00:10:21.537 08:23:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:22.913 lslocks: write error 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2201638 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2201638 ']' 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2201638 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2201638 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2201638' 00:10:22.913 killing process with pid 2201638 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2201638 00:10:22.913 08:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2201638 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2201906 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2201906 ']' 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2201906 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2201906 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2201906' 00:10:32.929 
killing process with pid 2201906 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2201906 00:10:32.929 08:23:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2201906 00:10:37.126 00:10:37.126 real 0m22.420s 00:10:37.126 user 0m23.688s 00:10:37.126 sys 0m2.773s 00:10:37.126 08:23:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.126 08:23:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:37.126 ************************************ 00:10:37.126 END TEST non_locking_app_on_locked_coremask 00:10:37.126 ************************************ 00:10:37.126 08:23:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:37.126 08:23:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:37.126 08:23:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:37.126 08:23:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.126 08:23:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:37.126 ************************************ 00:10:37.126 START TEST locking_app_on_unlocked_coremask 00:10:37.126 ************************************ 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2204186 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2204186 /var/tmp/spdk.sock 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2204186 ']' 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.126 08:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:37.126 [2024-07-23 08:23:49.299827] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:37.126 [2024-07-23 08:23:49.300146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2204186 ] 00:10:37.126 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.126 [2024-07-23 08:23:49.605426] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:37.126 [2024-07-23 08:23:49.605493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.695 [2024-07-23 08:23:50.107433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.617 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.617 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2204460 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2204460 /var/tmp/spdk2.sock 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2204460 ']' 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:39.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.618 08:23:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:39.618 [2024-07-23 08:23:51.978957] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:39.618 [2024-07-23 08:23:51.979291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2204460 ] 00:10:39.877 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.877 [2024-07-23 08:23:52.391822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.257 [2024-07-23 08:23:53.342540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.545 08:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.545 08:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:44.545 08:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2204460 00:10:44.545 08:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2204460 00:10:44.545 08:23:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:45.482 lslocks: write error 00:10:45.482 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2204186 00:10:45.482 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2204186 ']' 00:10:45.482 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2204186 00:10:45.482 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:45.482 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:45.482 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2204186 00:10:45.482 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:45.483 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:45.483 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2204186' 00:10:45.483 killing process with pid 2204186 00:10:45.483 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2204186 00:10:45.483 08:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2204186 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2204460 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2204460 ']' 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2204460 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2204460 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2204460' 00:10:55.465 killing process with pid 2204460 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2204460 00:10:55.465 08:24:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2204460 00:10:59.704 00:10:59.704 real 0m22.531s 00:10:59.704 user 0m23.752s 00:10:59.704 sys 0m2.823s 00:10:59.704 08:24:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.704 08:24:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:59.704 ************************************ 00:10:59.704 END TEST locking_app_on_unlocked_coremask 00:10:59.704 ************************************ 00:10:59.704 08:24:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:59.704 08:24:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:59.705 08:24:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:59.705 08:24:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.705 08:24:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 ************************************ 00:10:59.705 START TEST locking_app_on_locked_coremask 00:10:59.705 ************************************ 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2207243 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2207243 /var/tmp/spdk.sock 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2207243 ']' 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.705 08:24:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:59.705 [2024-07-23 08:24:11.851178] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:59.705 [2024-07-23 08:24:11.851375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207243 ] 00:10:59.705 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.705 [2024-07-23 08:24:12.053161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.274 [2024-07-23 08:24:12.545947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2207629 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2207629 /var/tmp/spdk2.sock 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2207629 /var/tmp/spdk2.sock 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2207629 /var/tmp/spdk2.sock 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2207629 ']' 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:02.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.815 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:02.815 [2024-07-23 08:24:15.117448] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
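Here the second target is started without --disable-cpumask-locks while pid 2207243 already owns the core 0 lock, so the entries that follow show claim_cpu_cores failing and the waitforlisten call wrapped in NOT, the autotest_common.sh helper that succeeds only when its command fails. A sketch of that expected-failure shape, with NOT assumed to be available from the sourced common scripts:

  # Attempt to start a second target on an already-locked core; it must not come up.
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!
  # NOT inverts the exit status: the test passes only if waitforlisten gives up.
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock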
00:11:02.815 [2024-07-23 08:24:15.117616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207629 ] 00:11:02.815 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.075 [2024-07-23 08:24:15.436544] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2207243 has claimed it. 00:11:03.075 [2024-07-23 08:24:15.436680] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:03.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2207629) - No such process 00:11:03.644 ERROR: process (pid: 2207629) is no longer running 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2207243 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2207243 00:11:03.644 08:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:04.211 lslocks: write error 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2207243 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2207243 ']' 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2207243 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2207243 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2207243' 00:11:04.211 killing process with pid 2207243 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2207243 00:11:04.211 08:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2207243 00:11:09.493 00:11:09.493 real 0m9.635s 00:11:09.493 user 0m10.252s 00:11:09.493 sys 0m1.659s 00:11:09.493 08:24:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.493 08:24:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.493 ************************************ 00:11:09.493 END TEST locking_app_on_locked_coremask 00:11:09.493 ************************************ 00:11:09.493 08:24:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:09.493 08:24:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:09.493 08:24:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:09.493 08:24:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.493 08:24:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:09.493 ************************************ 00:11:09.493 START TEST locking_overlapped_coremask 00:11:09.493 ************************************ 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2208447 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2208447 /var/tmp/spdk.sock 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2208447 ']' 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.493 08:24:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:09.493 [2024-07-23 08:24:21.659115] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:09.493 [2024-07-23 08:24:21.659467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208447 ] 00:11:09.493 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.493 [2024-07-23 08:24:21.960202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.062 [2024-07-23 08:24:22.463864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.062 [2024-07-23 08:24:22.463911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.062 [2024-07-23 08:24:22.463921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2208597 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2208597 /var/tmp/spdk2.sock 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2208597 /var/tmp/spdk2.sock 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2208597 /var/tmp/spdk2.sock 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2208597 ']' 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:11.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.441 08:24:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:11.441 [2024-07-23 08:24:23.737338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
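The first target above holds cores 0-2 (-m 0x7) and the second asks for cores 2-4 (-m 0x1c); the single overlapping core 2 is enough for the claim in the next entries to fail. After the failed start, check_remaining_locks confirms that exactly the original three lock files are still present, using the globbing shown at cpu_locks.sh lines 36-38. A minimal sketch of that verification, with a quoted string comparison standing in for the script's escaped pattern match:

  # Expect exactly the lock files for cores 0..2, and nothing else, to remain.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]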
00:11:11.441 [2024-07-23 08:24:23.737550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208597 ] 00:11:11.441 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.700 [2024-07-23 08:24:24.034667] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2208447 has claimed it. 00:11:11.700 [2024-07-23 08:24:24.034760] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:12.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2208597) - No such process 00:11:12.267 ERROR: process (pid: 2208597) is no longer running 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:12.267 08:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2208447 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2208447 ']' 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2208447 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2208447 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2208447' 00:11:12.268 killing process with pid 2208447 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2208447 00:11:12.268 08:24:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2208447 00:11:15.557 00:11:15.557 real 0m6.480s 00:11:15.557 user 0m15.951s 00:11:15.557 sys 0m1.199s 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.557 ************************************ 00:11:15.557 END TEST locking_overlapped_coremask 00:11:15.557 ************************************ 00:11:15.557 08:24:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:15.557 08:24:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:15.557 08:24:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:15.557 08:24:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.557 08:24:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:15.557 ************************************ 00:11:15.557 START TEST locking_overlapped_coremask_via_rpc 00:11:15.557 ************************************ 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2209157 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2209157 /var/tmp/spdk.sock 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2209157 ']' 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.557 08:24:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.816 [2024-07-23 08:24:28.210747] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:15.816 [2024-07-23 08:24:28.211101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2209157 ] 00:11:16.076 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.076 [2024-07-23 08:24:28.507662] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:16.076 [2024-07-23 08:24:28.507774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.645 [2024-07-23 08:24:28.999546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.645 [2024-07-23 08:24:28.999601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.645 [2024-07-23 08:24:28.999605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2209424 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2209424 /var/tmp/spdk2.sock 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2209424 ']' 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:18.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.021 08:24:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.021 [2024-07-23 08:24:30.326718] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:18.021 [2024-07-23 08:24:30.327050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2209424 ] 00:11:18.021 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.280 [2024-07-23 08:24:30.645636] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
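Both targets above are launched with --disable-cpumask-locks, so neither creates the per-core lock files (/var/tmp/spdk_cpu_lock_000, _001, ...) at startup; the earlier locking_overlapped_coremask run showed the default behaviour, where a second target whose mask overlaps on core 2 aborts with "Cannot create lock on core 2". A minimal sketch of that default case, reusing the core masks from this log (illustrative only, not test output):

    # first target claims cores 0-2 (mask 0x7) and creates /var/tmp/spdk_cpu_lock_000..002
    build/bin/spdk_tgt -m 0x7 &
    sleep 2
    ls /var/tmp/spdk_cpu_lock_*
    # a second target overlapping on core 2 (mask 0x1c) refuses to start
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock || echo "core 2 already locked"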
00:11:18.280 [2024-07-23 08:24:30.645710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:18.885 [2024-07-23 08:24:31.291516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.885 [2024-07-23 08:24:31.295389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.885 [2024-07-23 08:24:31.295395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.413 [2024-07-23 08:24:33.683521] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2209157 has claimed it. 
00:11:21.413 request: 00:11:21.413 { 00:11:21.413 "method": "framework_enable_cpumask_locks", 00:11:21.413 "req_id": 1 00:11:21.413 } 00:11:21.413 Got JSON-RPC error response 00:11:21.413 response: 00:11:21.413 { 00:11:21.413 "code": -32603, 00:11:21.413 "message": "Failed to claim CPU core: 2" 00:11:21.413 } 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2209157 /var/tmp/spdk.sock 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2209157 ']' 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.413 08:24:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2209424 /var/tmp/spdk2.sock 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2209424 ']' 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:21.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
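The RPC exchange above is the point of the via_rpc variant: both targets start with core locks disabled, locks are then requested at runtime through the framework_enable_cpumask_locks RPC, and only the second request fails because core 2 is already held by pid 2209157. A sketch of the equivalent manual calls with SPDK's rpc.py client, using the socket paths from this run:

    # first target (mask 0x7, default socket): acquires /var/tmp/spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second target (mask 0x1c, spdk2.sock): rejected with -32603 "Failed to claim CPU core: 2"
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks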
00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.978 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.544 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.544 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:22.544 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:22.544 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:22.545 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:22.545 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:22.545 00:11:22.545 real 0m6.806s 00:11:22.545 user 0m3.189s 00:11:22.545 sys 0m0.478s 00:11:22.545 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.545 08:24:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.545 ************************************ 00:11:22.545 END TEST locking_overlapped_coremask_via_rpc 00:11:22.545 ************************************ 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:22.545 08:24:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:22.545 08:24:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2209157 ]] 00:11:22.545 08:24:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2209157 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2209157 ']' 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2209157 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2209157 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2209157' 00:11:22.545 killing process with pid 2209157 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2209157 00:11:22.545 08:24:34 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2209157 00:11:25.832 08:24:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2209424 ]] 00:11:25.832 08:24:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2209424 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2209424 ']' 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2209424 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2209424 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2209424' 00:11:25.832 killing process with pid 2209424 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2209424 00:11:25.832 08:24:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2209424 00:11:31.117 08:24:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:31.117 08:24:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:31.117 08:24:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2209157 ]] 00:11:31.117 08:24:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2209157 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2209157 ']' 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2209157 00:11:31.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2209157) - No such process 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2209157 is not found' 00:11:31.117 Process with pid 2209157 is not found 00:11:31.117 08:24:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2209424 ]] 00:11:31.117 08:24:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2209424 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2209424 ']' 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2209424 00:11:31.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2209424) - No such process 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2209424 is not found' 00:11:31.117 Process with pid 2209424 is not found 00:11:31.117 08:24:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:31.117 00:11:31.117 real 1m33.594s 00:11:31.117 user 2m28.105s 00:11:31.117 sys 0m13.729s 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.117 08:24:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:31.117 ************************************ 00:11:31.117 END TEST cpu_locks 00:11:31.117 ************************************ 00:11:31.117 08:24:43 event -- common/autotest_common.sh@1142 -- # return 0 00:11:31.117 00:11:31.117 real 2m21.217s 00:11:31.117 user 4m1.083s 00:11:31.117 sys 0m22.256s 00:11:31.117 08:24:43 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.117 08:24:43 event -- common/autotest_common.sh@10 -- # set +x 00:11:31.117 ************************************ 00:11:31.117 END TEST event 00:11:31.117 ************************************ 00:11:31.377 08:24:43 -- common/autotest_common.sh@1142 -- # return 0 00:11:31.377 08:24:43 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:31.377 08:24:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:31.377 08:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.377 
08:24:43 -- common/autotest_common.sh@10 -- # set +x 00:11:31.377 ************************************ 00:11:31.377 START TEST thread 00:11:31.377 ************************************ 00:11:31.377 08:24:43 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:31.377 * Looking for test storage... 00:11:31.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:11:31.377 08:24:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:31.377 08:24:43 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:31.377 08:24:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.377 08:24:43 thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.377 ************************************ 00:11:31.377 START TEST thread_poller_perf 00:11:31.377 ************************************ 00:11:31.377 08:24:43 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:31.377 [2024-07-23 08:24:43.859272] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:31.377 [2024-07-23 08:24:43.859481] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210998 ] 00:11:31.637 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.637 [2024-07-23 08:24:44.070902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.207 [2024-07-23 08:24:44.556292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.207 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:11:34.113 ====================================== 00:11:34.113 busy:2732829273 (cyc) 00:11:34.113 total_run_count: 137000 00:11:34.113 tsc_hz: 2700000000 (cyc) 00:11:34.113 ====================================== 00:11:34.113 poller_cost: 19947 (cyc), 7387 (nsec) 00:11:34.113 00:11:34.113 real 0m2.628s 00:11:34.113 user 0m2.362s 00:11:34.113 sys 0m0.243s 00:11:34.113 08:24:46 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.113 08:24:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:34.113 ************************************ 00:11:34.113 END TEST thread_poller_perf 00:11:34.113 ************************************ 00:11:34.113 08:24:46 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:34.113 08:24:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:34.113 08:24:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:34.113 08:24:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.114 08:24:46 thread -- common/autotest_common.sh@10 -- # set +x 00:11:34.114 ************************************ 00:11:34.114 START TEST thread_poller_perf 00:11:34.114 ************************************ 00:11:34.114 08:24:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:34.114 [2024-07-23 08:24:46.595860] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:34.114 [2024-07-23 08:24:46.596147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211388 ] 00:11:34.373 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.633 [2024-07-23 08:24:46.905486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.893 [2024-07-23 08:24:47.408701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.893 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:11:36.799 ====================================== 00:11:36.799 busy:2709777888 (cyc) 00:11:36.799 total_run_count: 1770000 00:11:36.799 tsc_hz: 2700000000 (cyc) 00:11:36.799 ====================================== 00:11:36.799 poller_cost: 1530 (cyc), 566 (nsec) 00:11:36.799 00:11:36.799 real 0m2.730s 00:11:36.799 user 0m2.362s 00:11:36.799 sys 0m0.339s 00:11:36.799 08:24:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.799 08:24:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 ************************************ 00:11:36.799 END TEST thread_poller_perf 00:11:36.799 ************************************ 00:11:36.799 08:24:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:36.799 08:24:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:36.799 00:11:36.799 real 0m5.604s 00:11:36.799 user 0m4.819s 00:11:36.799 sys 0m0.748s 00:11:36.799 08:24:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.799 08:24:49 thread -- common/autotest_common.sh@10 -- # set +x 00:11:36.799 ************************************ 00:11:36.799 END TEST thread 00:11:36.799 ************************************ 00:11:36.799 08:24:49 -- common/autotest_common.sh@1142 -- # return 0 00:11:36.799 08:24:49 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:11:36.799 08:24:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:37.058 08:24:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.058 08:24:49 -- common/autotest_common.sh@10 -- # set +x 00:11:37.058 ************************************ 00:11:37.058 START TEST accel 00:11:37.058 ************************************ 00:11:37.058 08:24:49 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:11:37.058 * Looking for test storage... 00:11:37.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:11:37.058 08:24:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:37.058 08:24:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:37.058 08:24:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:37.058 08:24:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2211724 00:11:37.058 08:24:49 accel -- accel/accel.sh@63 -- # waitforlisten 2211724 00:11:37.058 08:24:49 accel -- common/autotest_common.sh@829 -- # '[' -z 2211724 ']' 00:11:37.059 08:24:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.059 08:24:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:37.059 08:24:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:37.059 08:24:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.059 08:24:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:37.059 08:24:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
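In the two poller_perf summaries above, poller_cost is the busy TSC cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz of 2.7 GHz; the timed pollers (1 microsecond period) come out far more expensive per invocation than the 0-period run. The figures can be reproduced with shell arithmetic alone (no new measurements involved):

    echo $(( 2732829273 / 137000 ))    # ~19947 cyc; 19947 / 2.7 ~= 7387 ns  (1 us period run)
    echo $(( 2709777888 / 1770000 ))   # ~1530 cyc;  1530 / 2.7  ~= 566 ns   (0 us period run)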
00:11:37.059 08:24:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:37.059 08:24:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.059 08:24:49 accel -- common/autotest_common.sh@10 -- # set +x 00:11:37.059 08:24:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.059 08:24:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.059 08:24:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:37.059 08:24:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:37.059 08:24:49 accel -- accel/accel.sh@41 -- # jq -r . 00:11:37.317 [2024-07-23 08:24:49.666456] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:37.317 [2024-07-23 08:24:49.666788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211724 ] 00:11:37.579 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.579 [2024-07-23 08:24:49.961171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.181 [2024-07-23 08:24:50.439994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.722 08:24:52 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.722 08:24:52 accel -- common/autotest_common.sh@862 -- # return 0 00:11:40.722 08:24:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:40.722 08:24:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:40.722 08:24:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:40.722 08:24:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:11:40.722 08:24:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:40.722 08:24:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:40.722 08:24:52 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.722 08:24:52 accel -- common/autotest_common.sh@10 -- # set +x 00:11:40.722 08:24:52 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:11:40.722 08:24:52 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.722 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.722 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.722 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.723 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.723 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.723 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.723 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.723 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.723 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.723 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.723 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.723 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.723 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.723 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.723 08:24:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # IFS== 00:11:40.723 08:24:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:40.723 08:24:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:40.723 08:24:53 accel -- accel/accel.sh@75 -- # killprocess 2211724 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@948 -- # '[' -z 2211724 ']' 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@952 -- # kill -0 2211724 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@953 -- # uname 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2211724 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2211724' 00:11:40.723 killing process with pid 2211724 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@967 -- # kill 2211724 00:11:40.723 08:24:53 accel -- common/autotest_common.sh@972 -- # wait 2211724 00:11:46.001 08:24:57 accel -- accel/accel.sh@76 -- # trap - ERR 00:11:46.001 08:24:57 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:11:46.001 08:24:57 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.001 08:24:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.001 08:24:57 accel -- common/autotest_common.sh@10 -- # set +x 00:11:46.001 08:24:57 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:11:46.001 08:24:57 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:11:46.001 08:24:58 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.001 08:24:58 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:11:46.001 08:24:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:46.001 08:24:58 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:46.001 08:24:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:46.001 08:24:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.001 08:24:58 accel -- common/autotest_common.sh@10 -- # set +x 00:11:46.001 ************************************ 00:11:46.001 START TEST accel_missing_filename 00:11:46.001 ************************************ 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.001 08:24:58 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:11:46.001 08:24:58 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:11:46.001 [2024-07-23 08:24:58.236626] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:46.001 [2024-07-23 08:24:58.236918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212695 ] 00:11:46.001 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.260 [2024-07-23 08:24:58.522646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.519 [2024-07-23 08:24:59.011757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.088 [2024-07-23 08:24:59.467730] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:48.026 [2024-07-23 08:25:00.495847] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:11:48.964 A filename is required. 
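"A filename is required." is the expected outcome here: the compress workload has no default input, so accel_perf refuses to start when -l is omitted, and the NOT wrapper turns that failure into a pass. The accel_compress_verify test that follows covers the companion case where -l is supplied but -y (verify) is combined with compress, which accel_perf also rejects. Both expected-failure invocations, roughly as the suite issues them (paths shortened):

    build/examples/accel_perf -t 1 -w compress                       # "A filename is required."
    build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y  # verify is not supported for compress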
00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:48.964 00:11:48.964 real 0m3.209s 00:11:48.964 user 0m2.835s 00:11:48.964 sys 0m0.427s 00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.964 08:25:01 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:11:48.964 ************************************ 00:11:48.964 END TEST accel_missing_filename 00:11:48.964 ************************************ 00:11:48.964 08:25:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:48.964 08:25:01 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:11:48.964 08:25:01 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:48.964 08:25:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.964 08:25:01 accel -- common/autotest_common.sh@10 -- # set +x 00:11:48.964 ************************************ 00:11:48.964 START TEST accel_compress_verify 00:11:48.964 ************************************ 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.964 08:25:01 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.964 08:25:01 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:48.964 08:25:01 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:11:49.223 [2024-07-23 08:25:01.528690] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:49.223 [2024-07-23 08:25:01.528990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213103 ] 00:11:49.223 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.483 [2024-07-23 08:25:01.828960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.051 [2024-07-23 08:25:02.306505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.310 [2024-07-23 08:25:02.737366] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:51.689 [2024-07-23 08:25:03.806130] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:11:52.257 00:11:52.257 Compression does not support the verify option, aborting. 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:52.257 00:11:52.257 real 0m3.181s 00:11:52.257 user 0m2.785s 00:11:52.257 sys 0m0.447s 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.257 08:25:04 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:11:52.257 ************************************ 00:11:52.257 END TEST accel_compress_verify 00:11:52.257 ************************************ 00:11:52.257 08:25:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:52.257 08:25:04 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:52.257 08:25:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:52.257 08:25:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.257 08:25:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:52.257 ************************************ 00:11:52.257 START TEST accel_wrong_workload 00:11:52.257 ************************************ 00:11:52.257 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:11:52.257 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:11:52.257 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:52.257 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:52.257 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:52.257 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:52.257 08:25:04 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:52.257 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:11:52.257 08:25:04 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:11:52.257 Unsupported workload type: foobar 00:11:52.257 [2024-07-23 08:25:04.764824] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:52.518 accel_perf options: 00:11:52.518 [-h help message] 00:11:52.518 [-q queue depth per core] 00:11:52.518 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:52.518 [-T number of threads per core 00:11:52.518 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:52.518 [-t time in seconds] 00:11:52.518 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:52.518 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:52.518 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:52.518 [-l for compress/decompress workloads, name of uncompressed input file 00:11:52.518 [-S for crc32c workload, use this seed value (default 0) 00:11:52.518 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:52.518 [-f for fill workload, use this BYTE value (default 255) 00:11:52.518 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:52.518 [-y verify result if this switch is on] 00:11:52.518 [-a tasks to allocate per core (default: same value as -q)] 00:11:52.518 Can be used to spread operations across a wider range of memory. 
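The listing above is accel_perf's usage help, printed because foobar is not one of the supported -w workload types; the harness only needs the resulting non-zero exit code. For contrast, the crc32c test further down runs a valid invocation built from the same options (this is the form visible later in this log):

    build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # crc32c workload, seed 32, verify the results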
00:11:52.518 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:11:52.518 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:52.518 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:52.518 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:52.518 00:11:52.518 real 0m0.104s 00:11:52.518 user 0m0.113s 00:11:52.518 sys 0m0.053s 00:11:52.518 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.518 08:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:11:52.518 ************************************ 00:11:52.518 END TEST accel_wrong_workload 00:11:52.518 ************************************ 00:11:52.518 08:25:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:52.518 08:25:04 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:52.518 08:25:04 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:52.518 08:25:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.518 08:25:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:52.518 ************************************ 00:11:52.518 START TEST accel_negative_buffers 00:11:52.518 ************************************ 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:11:52.518 08:25:04 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:11:52.518 -x option must be non-negative. 
00:11:52.518 [2024-07-23 08:25:04.958200] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:52.518 accel_perf options: 00:11:52.518 [-h help message] 00:11:52.518 [-q queue depth per core] 00:11:52.518 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:52.518 [-T number of threads per core 00:11:52.518 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:52.518 [-t time in seconds] 00:11:52.518 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:52.518 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:52.518 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:52.518 [-l for compress/decompress workloads, name of uncompressed input file 00:11:52.518 [-S for crc32c workload, use this seed value (default 0) 00:11:52.518 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:52.518 [-f for fill workload, use this BYTE value (default 255) 00:11:52.518 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:52.518 [-y verify result if this switch is on] 00:11:52.518 [-a tasks to allocate per core (default: same value as -q)] 00:11:52.518 Can be used to spread operations across a wider range of memory. 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:52.518 00:11:52.518 real 0m0.108s 00:11:52.518 user 0m0.118s 00:11:52.518 sys 0m0.062s 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.518 08:25:04 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:11:52.518 ************************************ 00:11:52.518 END TEST accel_negative_buffers 00:11:52.518 ************************************ 00:11:52.518 08:25:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:52.518 08:25:05 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:52.518 08:25:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:52.518 08:25:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.518 08:25:05 accel -- common/autotest_common.sh@10 -- # set +x 00:11:52.778 ************************************ 00:11:52.778 START TEST accel_crc32c 00:11:52.778 ************************************ 00:11:52.778 08:25:05 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:52.778 08:25:05 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:52.778 [2024-07-23 08:25:05.132246] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:52.778 [2024-07-23 08:25:05.132496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213561 ] 00:11:52.778 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.037 [2024-07-23 08:25:05.406664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.606 [2024-07-23 08:25:05.893548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.865 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:53.866 08:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:57.176 08:25:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.176 00:11:57.176 real 0m4.181s 00:11:57.176 user 0m0.019s 00:11:57.176 sys 0m0.005s 00:11:57.176 08:25:09 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.176 08:25:09 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:57.176 ************************************ 00:11:57.176 END TEST accel_crc32c 00:11:57.176 ************************************ 00:11:57.176 08:25:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:57.176 08:25:09 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:11:57.176 08:25:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:57.176 08:25:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.176 08:25:09 accel -- common/autotest_common.sh@10 -- # set +x 00:11:57.176 ************************************ 00:11:57.176 START TEST accel_crc32c_C2 00:11:57.176 ************************************ 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:57.176 08:25:09 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:57.176 08:25:09 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:57.176 [2024-07-23 08:25:09.406751] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:57.176 [2024-07-23 08:25:09.407046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214042 ] 00:11:57.176 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.442 [2024-07-23 08:25:09.694937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.701 [2024-07-23 08:25:10.184954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.271 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:11:58.272 08:25:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:01.566 00:12:01.566 real 0m4.211s 00:12:01.566 user 0m3.738s 00:12:01.566 sys 0m0.453s 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.566 08:25:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:01.566 ************************************ 00:12:01.566 END TEST accel_crc32c_C2 00:12:01.566 ************************************ 00:12:01.566 08:25:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:01.566 08:25:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:01.566 08:25:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:01.566 08:25:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.566 08:25:13 accel -- common/autotest_common.sh@10 -- # set +x 00:12:01.566 ************************************ 00:12:01.566 START TEST accel_copy 00:12:01.566 ************************************ 00:12:01.566 08:25:13 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:01.566 08:25:13 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:01.566 [2024-07-23 08:25:13.643810] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:01.566 [2024-07-23 08:25:13.643963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214528 ] 00:12:01.566 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.566 [2024-07-23 08:25:13.842478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.827 [2024-07-23 08:25:14.318694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.397 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:02.398 08:25:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 
08:25:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:05.693 08:25:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:05.693 00:12:05.693 real 0m4.106s 00:12:05.693 user 0m0.020s 00:12:05.693 sys 0m0.003s 00:12:05.693 08:25:17 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.693 08:25:17 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:05.693 ************************************ 00:12:05.693 END TEST accel_copy 00:12:05.693 ************************************ 00:12:05.693 08:25:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:05.693 08:25:17 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:05.693 08:25:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:05.693 08:25:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.693 08:25:17 accel -- common/autotest_common.sh@10 -- # set +x 00:12:05.693 ************************************ 00:12:05.693 START TEST accel_fill 00:12:05.693 ************************************ 00:12:05.693 08:25:17 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:05.693 08:25:17 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.694 08:25:17 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.694 08:25:17 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:05.694 08:25:17 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:05.694 08:25:17 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:12:05.694 [2024-07-23 08:25:17.851898] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:05.694 [2024-07-23 08:25:17.852124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214968 ] 00:12:05.694 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.694 [2024-07-23 08:25:18.125280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.264 [2024-07-23 08:25:18.605268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
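In the fill case just started, the -f 128 fill byte from the accel_perf command line comes back out of the settings reads as val=0x80, which is simply the same value in hexadecimal, and the -q 64 / -a 64 arguments likewise reappear as the val=64 reads in the frames that follow:

  printf '0x%x\n' 128    # -> 0x80, matching the val=0x80 read above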
00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.525 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.784 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:06.784 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:06.785 08:25:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.078 08:25:21 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:10.078 08:25:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:10.078 00:12:10.078 real 0m4.157s 00:12:10.078 user 0m3.716s 00:12:10.078 sys 0m0.423s 00:12:10.078 08:25:21 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.078 08:25:21 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:10.078 ************************************ 00:12:10.078 END TEST accel_fill 00:12:10.078 ************************************ 00:12:10.078 08:25:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:10.078 08:25:21 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:10.078 08:25:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:10.078 08:25:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.078 08:25:21 accel -- common/autotest_common.sh@10 -- # set +x 00:12:10.078 ************************************ 00:12:10.078 START TEST accel_copy_crc32c 00:12:10.078 ************************************ 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:10.078 08:25:22 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:10.078 [2024-07-23 08:25:22.062504] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:10.078 [2024-07-23 08:25:22.062663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215489 ] 00:12:10.078 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.078 [2024-07-23 08:25:22.264239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.338 [2024-07-23 08:25:22.708387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.907 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.908 
08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.908 08:25:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:14.202 00:12:14.202 real 0m4.034s 00:12:14.202 user 0m3.650s 00:12:14.202 sys 0m0.369s 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.202 08:25:26 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:14.202 ************************************ 00:12:14.202 END TEST accel_copy_crc32c 00:12:14.202 ************************************ 00:12:14.202 08:25:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:14.202 08:25:26 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:14.202 08:25:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:14.202 08:25:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.202 08:25:26 accel -- common/autotest_common.sh@10 -- # set +x 00:12:14.202 ************************************ 00:12:14.202 START TEST accel_copy_crc32c_C2 00:12:14.202 ************************************ 00:12:14.202 08:25:26 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:14.202 08:25:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:14.202 [2024-07-23 08:25:26.214634] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:14.202 [2024-07-23 08:25:26.214928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215911 ] 00:12:14.202 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.202 [2024-07-23 08:25:26.509297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.772 [2024-07-23 08:25:27.012909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.033 08:25:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.330 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
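One detail worth flagging in the expected-settings reads above: this copy_crc32c case run with -C 2 lists both a 4096-byte and an 8192-byte buffer, whereas the plain copy_crc32c case earlier listed 4096 bytes twice. The larger figure is exactly twice the transfer size, in line with the -C 2 argument that distinguishes this case:

  echo $(( 2 * 4096 ))   # 8192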
00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:18.331 00:12:18.331 real 0m4.235s 00:12:18.331 user 0m3.765s 00:12:18.331 sys 0m0.450s 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.331 08:25:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:18.331 ************************************ 00:12:18.331 END TEST accel_copy_crc32c_C2 00:12:18.331 ************************************ 00:12:18.331 08:25:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:18.331 08:25:30 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:18.331 08:25:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:18.331 08:25:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.331 08:25:30 accel -- common/autotest_common.sh@10 -- # set +x 00:12:18.331 ************************************ 00:12:18.331 START TEST accel_dualcast 00:12:18.331 ************************************ 00:12:18.331 08:25:30 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:18.331 08:25:30 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:18.331 [2024-07-23 08:25:30.522878] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
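For orientation, every case in this stretch of the log follows the same shape: run_test wraps accel_test, which builds an accel JSON config, hands it to build/examples/accel_perf over -c /dev/fd/62 together with the workload flags, reads back the colon-separated settings, and finally checks that the software module executed the expected opcode. The invocations visible in the trace are:

  accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y                 (accel_crc32c)
  run_test accel_crc32c_C2      accel_test -t 1 -w crc32c -y -C 2
  run_test accel_copy           accel_test -t 1 -w copy -y
  run_test accel_fill           accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
  run_test accel_copy_crc32c    accel_test -t 1 -w copy_crc32c -y
  run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
  run_test accel_dualcast       accel_test -t 1 -w dualcast -y     (starting here)

Each case reports roughly 4 s of real time for a 1-second (-t 1) measurement window; the difference is presumably accel_perf start-up and teardown (EAL/hugepage initialization and reactor shutdown) rather than workload time.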
00:12:18.331 [2024-07-23 08:25:30.523177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216457 ] 00:12:18.331 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.331 [2024-07-23 08:25:30.821849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.901 [2024-07-23 08:25:31.300019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:19.470 08:25:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:22.774 08:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:22.775 08:25:34 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:22.775 08:25:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:22.775 00:12:22.775 real 0m4.259s 00:12:22.775 user 0m3.780s 00:12:22.775 sys 0m0.458s 00:12:22.775 08:25:34 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.775 08:25:34 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:22.775 ************************************ 00:12:22.775 END TEST accel_dualcast 00:12:22.775 ************************************ 00:12:22.775 08:25:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:22.775 08:25:34 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:22.775 08:25:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:22.775 08:25:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.775 08:25:34 accel -- common/autotest_common.sh@10 -- # set +x 00:12:22.775 ************************************ 00:12:22.775 START TEST accel_compare 00:12:22.775 ************************************ 00:12:22.775 08:25:34 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:22.775 08:25:34 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:22.775 [2024-07-23 08:25:34.860265] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:22.775 [2024-07-23 08:25:34.860517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216938 ] 00:12:22.775 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.775 [2024-07-23 08:25:35.159043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.344 [2024-07-23 08:25:35.661103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:23.604 08:25:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 
08:25:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:26.896 08:25:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:26.896 00:12:26.896 real 0m4.256s 00:12:26.896 user 0m3.809s 00:12:26.896 sys 0m0.425s 00:12:26.896 08:25:39 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:26.896 08:25:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:26.896 ************************************ 00:12:26.896 END TEST accel_compare 00:12:26.896 ************************************ 00:12:26.896 08:25:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:26.896 08:25:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:26.896 08:25:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:26.896 08:25:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.896 08:25:39 accel -- common/autotest_common.sh@10 -- # set +x 00:12:26.896 ************************************ 00:12:26.896 START TEST accel_xor 00:12:26.896 ************************************ 00:12:26.896 08:25:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:26.896 08:25:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:26.896 [2024-07-23 08:25:39.199522] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:26.896 [2024-07-23 08:25:39.199827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217418 ] 00:12:26.896 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.156 [2024-07-23 08:25:39.502930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.726 [2024-07-23 08:25:39.987449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.986 08:25:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:31.280 00:12:31.280 real 0m4.227s 00:12:31.280 user 0m3.775s 00:12:31.280 sys 0m0.433s 00:12:31.280 08:25:43 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.280 08:25:43 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:31.280 ************************************ 00:12:31.280 END TEST accel_xor 00:12:31.280 ************************************ 00:12:31.280 08:25:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:31.280 08:25:43 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:31.280 08:25:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:31.280 08:25:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.280 08:25:43 accel -- common/autotest_common.sh@10 -- # set +x 00:12:31.280 ************************************ 00:12:31.280 START TEST accel_xor 00:12:31.280 ************************************ 00:12:31.280 08:25:43 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:31.280 08:25:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:31.280 [2024-07-23 08:25:43.503094] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
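The second accel_xor run differs from the first only in the added -x 3 argument; the 'val=3' entry in its trace, versus 'val=2' in the run above, suggests it XORs across three source buffers rather than two, though that reading of -x is an inference from the log. Under the same assumptions as the earlier sketch (hypothetical config stand-in included):

  # sketch only: the three-source xor variant (-x 3)
  ./build/examples/accel_perf -c <(echo '{}') -t 1 -w xor -y -x 3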
00:12:31.280 [2024-07-23 08:25:43.503413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217956 ] 00:12:31.280 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.540 [2024-07-23 08:25:43.806793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.108 [2024-07-23 08:25:44.323000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.367 08:25:44 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.367 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:32.368 08:25:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:35.683 08:25:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.683 00:12:35.683 real 0m4.321s 00:12:35.683 user 0m3.833s 00:12:35.683 sys 0m0.467s 00:12:35.683 08:25:47 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.683 08:25:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:35.683 ************************************ 00:12:35.683 END TEST accel_xor 00:12:35.683 ************************************ 00:12:35.683 08:25:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:35.683 08:25:47 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:35.683 08:25:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:35.683 08:25:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.683 08:25:47 accel -- common/autotest_common.sh@10 -- # set +x 00:12:35.683 ************************************ 00:12:35.683 START TEST accel_dif_verify 00:12:35.683 ************************************ 00:12:35.683 08:25:47 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:35.683 08:25:47 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:35.683 [2024-07-23 08:25:47.906895] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
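The DIF workloads that follow (dif_verify, then dif_generate and dif_generate_copy) reuse the same pattern but drop the -y flag; their traces below add '512 bytes' and '8 bytes' values alongside the 4096-byte buffers, which look like the block size and per-block protection-information size used by the DIF operations, though that mapping is a guess from the trace rather than documented behaviour. Under the same assumptions as the sketches above:

  # sketch only: the dif_verify run recorded above, config stand-in as before
  ./build/examples/accel_perf -c <(echo '{}') -t 1 -w dif_verify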
00:12:35.683 [2024-07-23 08:25:47.907186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218386 ] 00:12:35.683 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.942 [2024-07-23 08:25:48.204805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.202 [2024-07-23 08:25:48.716294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:36.815 08:25:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:40.104 08:25:52 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:40.104 00:12:40.104 real 0m4.228s 00:12:40.104 user 0m3.745s 00:12:40.104 sys 0m0.478s 00:12:40.104 08:25:52 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:40.104 08:25:52 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:40.104 ************************************ 00:12:40.104 END TEST accel_dif_verify 00:12:40.104 ************************************ 00:12:40.104 08:25:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:40.104 08:25:52 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:40.104 08:25:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:40.104 08:25:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.104 08:25:52 accel -- common/autotest_common.sh@10 -- # set +x 00:12:40.104 ************************************ 00:12:40.104 START TEST accel_dif_generate 00:12:40.104 ************************************ 00:12:40.104 08:25:52 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:40.104 
08:25:52 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:40.104 08:25:52 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:40.104 [2024-07-23 08:25:52.212527] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:40.104 [2024-07-23 08:25:52.212826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218927 ] 00:12:40.104 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.104 [2024-07-23 08:25:52.520848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.672 [2024-07-23 08:25:53.003915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:41.240 08:25:53 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:41.240 08:25:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:44.532 08:25:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:44.532 08:25:56 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:44.532 00:12:44.532 real 0m4.267s 00:12:44.532 user 0m3.795s 00:12:44.532 sys 0m0.469s 00:12:44.532 08:25:56 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.532 08:25:56 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:44.532 ************************************ 00:12:44.532 END TEST accel_dif_generate 00:12:44.532 ************************************ 00:12:44.532 08:25:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:44.532 08:25:56 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:44.532 08:25:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:44.532 08:25:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.532 08:25:56 accel -- common/autotest_common.sh@10 -- # set +x 00:12:44.532 ************************************ 00:12:44.532 START TEST accel_dif_generate_copy 00:12:44.532 ************************************ 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:44.532 08:25:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:44.532 [2024-07-23 08:25:56.550950] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
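The accel_dif_generate case that finishes above drives the standalone accel_perf example with the software accel module. A minimal sketch of an equivalent manual run, assuming a local SPDK checkout built in ./spdk (the absolute /var/jenkins/... path in this log is specific to the CI workspace), might look like:

  # Run the DIF-generate workload for 1 second on one core; with no JSON
  # config supplied, accel_perf falls back to the software accel module.
  ./spdk/build/examples/accel_perf -t 1 -w dif_generate

The -c /dev/fd/62 argument recorded in the log appears to be how accel.sh feeds its generated JSON accel configuration to the app over a file descriptor; a plain software-module run does not require it.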
00:12:44.532 [2024-07-23 08:25:56.551261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219440 ] 00:12:44.532 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.532 [2024-07-23 08:25:56.854928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.102 [2024-07-23 08:25:57.360443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:45.361 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:45.362 08:25:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:48.653 00:12:48.653 real 0m4.298s 00:12:48.653 user 0m0.032s 00:12:48.653 sys 0m0.004s 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.653 08:26:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:48.653 ************************************ 00:12:48.653 END TEST accel_dif_generate_copy 00:12:48.653 ************************************ 00:12:48.653 08:26:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:48.653 08:26:00 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:48.653 08:26:00 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:48.653 08:26:00 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:48.653 08:26:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.653 08:26:00 accel -- common/autotest_common.sh@10 -- # set +x 00:12:48.653 ************************************ 00:12:48.653 START TEST accel_comp 00:12:48.653 ************************************ 00:12:48.653 08:26:00 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:48.653 08:26:00 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:48.653 08:26:00 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:48.653 [2024-07-23 08:26:00.941749] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:48.653 [2024-07-23 08:26:00.942049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219892 ] 00:12:48.653 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.917 [2024-07-23 08:26:01.237483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.486 [2024-07-23 08:26:01.725785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:49.755 08:26:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:53.043 08:26:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.043 00:12:53.043 real 0m4.275s 00:12:53.043 user 0m0.038s 00:12:53.043 sys 0m0.001s 00:12:53.043 08:26:05 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.043 08:26:05 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:53.043 ************************************ 00:12:53.043 END TEST accel_comp 00:12:53.043 ************************************ 00:12:53.043 08:26:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:53.043 08:26:05 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:53.043 08:26:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:53.043 08:26:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.044 08:26:05 accel -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.044 ************************************ 00:12:53.044 START TEST accel_decomp 00:12:53.044 ************************************ 00:12:53.044 08:26:05 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:53.044 08:26:05 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:53.044 [2024-07-23 08:26:05.301563] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
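The accel_comp run above and the accel_decomp run starting here exercise the same example binary with the compress and decompress workloads against the test/accel/bib input file from the SPDK tree. A rough local equivalent, under the same ./spdk checkout assumption as the earlier sketch, could be:

  # Compress the sample input for 1 second using the software module
  ./spdk/build/examples/accel_perf -t 1 -w compress -l ./spdk/test/accel/bib

  # Decompress it again; -y appears to enable result verification
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y

The flag meanings are inferred from the command lines recorded in this log rather than from the tool's own documentation.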
00:12:53.044 [2024-07-23 08:26:05.301873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220440 ] 00:12:53.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.304 [2024-07-23 08:26:05.605133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.874 [2024-07-23 08:26:06.086913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.133 08:26:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:57.470 08:26:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:57.470 00:12:57.470 real 0m4.258s 00:12:57.470 user 0m0.036s 00:12:57.470 sys 0m0.003s 00:12:57.470 08:26:09 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.470 08:26:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:57.470 ************************************ 00:12:57.470 END TEST accel_decomp 00:12:57.470 ************************************ 00:12:57.470 08:26:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:57.470 08:26:09 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:57.470 08:26:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:57.470 08:26:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.470 08:26:09 accel -- common/autotest_common.sh@10 -- # set +x 00:12:57.470 ************************************ 00:12:57.470 START TEST accel_decomp_full 00:12:57.470 ************************************ 00:12:57.470 08:26:09 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:57.470 08:26:09 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:57.470 08:26:09 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:57.470 [2024-07-23 08:26:09.639489] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:57.470 [2024-07-23 08:26:09.639800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220862 ] 00:12:57.470 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.470 [2024-07-23 08:26:09.925102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.041 [2024-07-23 08:26:10.409887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.611 08:26:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:01.906 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:01.907 08:26:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:01.907 00:13:01.907 real 0m4.279s 00:13:01.907 user 0m3.814s 00:13:01.907 sys 0m0.459s 00:13:01.907 08:26:13 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.907 08:26:13 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:13:01.907 ************************************ 00:13:01.907 END TEST accel_decomp_full 00:13:01.907 ************************************ 00:13:01.907 08:26:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:01.907 08:26:13 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:01.907 08:26:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:13:01.907 08:26:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.907 08:26:13 accel -- common/autotest_common.sh@10 -- # set +x 00:13:01.907 ************************************ 00:13:01.907 START TEST accel_decomp_mcore 00:13:01.907 ************************************ 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:01.907 08:26:13 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:01.907 [2024-07-23 08:26:13.965641] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
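accel_decomp_mcore repeats the decompress case with -m 0xf, a four-core mask, which is why this run reports four available cores and starts reactors on cores 0-3 instead of a single core 0. A hedged sketch of the multi-core variant, with the same local-path assumption as the earlier examples:

  # Same decompress workload, but spread across four cores (mask 0xf)
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -m 0xf

-m is the standard SPDK core-mask option, so any mask that matches the cores available on the machine should work in place of 0xf.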
00:13:01.907 [2024-07-23 08:26:13.965823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221403 ] 00:13:01.907 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.907 [2024-07-23 08:26:14.237181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.475 [2024-07-23 08:26:14.730465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.475 [2024-07-23 08:26:14.730538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.475 [2024-07-23 08:26:14.730599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.475 [2024-07-23 08:26:14.730613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.733 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:13:02.734 08:26:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:05.265 00:13:05.265 real 0m3.669s 00:13:05.265 user 0m0.026s 00:13:05.265 sys 0m0.006s 00:13:05.265 08:26:17 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.265 08:26:17 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:05.265 ************************************ 00:13:05.265 END TEST accel_decomp_mcore 00:13:05.265 ************************************ 00:13:05.265 08:26:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:05.265 08:26:17 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.265 08:26:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:05.265 08:26:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.265 08:26:17 accel -- common/autotest_common.sh@10 -- # set +x 00:13:05.265 ************************************ 00:13:05.265 START TEST accel_decomp_full_mcore 00:13:05.265 ************************************ 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:05.265 08:26:17 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:05.265 [2024-07-23 08:26:17.737125] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
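The run_test line above launches accel_decomp_full_mcore, which is simply the harness timing a single accel_perf invocation (the full command is echoed in the trace, and its option values are read back one by one in the val= lines that follow). A minimal standalone sketch of the same run, assuming the workspace layout shown in this log and that the JSON accel config the script normally pipes in over /dev/fd/62 has been saved to a local accel.json (a placeholder name, not a file the log creates):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -c ./accel.json -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -o 0 -m 0xf

Reading the flags against the values echoed below: -t 1 is the '1 seconds' duration, -w decompress selects the workload, -l names the compressed bib input, -y asks for the output to be verified, -m 0xf is the core mask that reappears as DPDK's '-c 0xf' and the four reactors, and -o 0 appears to request the full input ('111250 bytes') as the transfer size where the non-full variants echo '4096 bytes'.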
00:13:05.265 [2024-07-23 08:26:17.737283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221823 ] 00:13:05.525 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.525 [2024-07-23 08:26:17.945593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.093 [2024-07-23 08:26:18.443226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.093 [2024-07-23 08:26:18.443283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.093 [2024-07-23 08:26:18.443344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.093 [2024-07-23 08:26:18.443352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:06.353 08:26:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.885 00:13:08.885 real 0m3.705s 00:13:08.885 user 0m0.025s 00:13:08.885 sys 0m0.007s 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.885 08:26:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:08.885 ************************************ 00:13:08.885 END TEST accel_decomp_full_mcore 00:13:08.885 ************************************ 00:13:08.886 08:26:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:08.886 08:26:21 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:08.886 08:26:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:08.886 08:26:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.886 08:26:21 accel -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 ************************************ 00:13:09.146 START TEST accel_decomp_mthread 00:13:09.146 ************************************ 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:09.146 08:26:21 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:09.146 [2024-07-23 08:26:21.532140] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
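accel_decomp_mthread, started just above, trades the multi-core mask for a threading knob: the echoed command drops -m 0xf and adds -T 2, which, judging by the test name and the '2' read back in the trace, runs two worker threads on the single default core (the EAL line below shows '-c 0x1' and one reactor on core 0). With SPDK pointing at the same workspace checkout as in the earlier sketch, the equivalent direct run would be roughly:

  $SPDK/build/examples/accel_perf -c ./accel.json -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -T 2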
00:13:09.146 [2024-07-23 08:26:21.532459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222246 ] 00:13:09.406 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.406 [2024-07-23 08:26:21.825613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.975 [2024-07-23 08:26:22.309169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.543 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.544 08:26:22 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:10.544 08:26:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.835 08:26:25 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.835 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:13.836 00:13:13.836 real 0m4.285s 00:13:13.836 user 0m0.033s 00:13:13.836 sys 0m0.007s 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.836 08:26:25 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:13.836 ************************************ 00:13:13.836 END TEST accel_decomp_mthread 00:13:13.836 ************************************ 00:13:13.836 08:26:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:13.836 08:26:25 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.836 08:26:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:13.836 08:26:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.836 08:26:25 accel -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.836 ************************************ 00:13:13.836 START TEST accel_decomp_full_mthread 00:13:13.836 ************************************ 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:13.836 08:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:13.836 [2024-07-23 08:26:25.861352] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
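accel_decomp_full_mthread, kicked off above, combines the two variations already seen: -o 0 for full-sized (111250-byte) transfers and -T 2 for two worker threads, still on one core. Under the same assumptions as the earlier sketches, the direct equivalent would be roughly:

  $SPDK/build/examples/accel_perf -c ./accel.json -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -o 0 -T 2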
00:13:13.836 [2024-07-23 08:26:25.861535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222789 ] 00:13:13.836 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.836 [2024-07-23 08:26:26.154449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.405 [2024-07-23 08:26:26.633882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:14.665 08:26:27 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:14.665 08:26:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:18.011 00:13:18.011 real 0m4.332s 00:13:18.011 user 0m3.896s 00:13:18.011 sys 0m0.434s 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.011 08:26:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:18.011 ************************************ 00:13:18.011 END 
TEST accel_decomp_full_mthread 00:13:18.011 ************************************ 00:13:18.011 08:26:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:18.011 08:26:30 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:18.011 08:26:30 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:18.011 08:26:30 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:18.011 08:26:30 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:18.011 08:26:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:18.011 08:26:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.011 08:26:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:18.011 08:26:30 accel -- common/autotest_common.sh@10 -- # set +x 00:13:18.011 08:26:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:18.011 08:26:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:18.011 08:26:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:18.011 08:26:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:18.011 08:26:30 accel -- accel/accel.sh@41 -- # jq -r . 00:13:18.011 ************************************ 00:13:18.011 START TEST accel_dif_functional_tests 00:13:18.011 ************************************ 00:13:18.011 08:26:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:18.011 [2024-07-23 08:26:30.375508] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:18.011 [2024-07-23 08:26:30.375782] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223212 ] 00:13:18.271 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.271 [2024-07-23 08:26:30.680499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.838 [2024-07-23 08:26:31.195801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.838 [2024-07-23 08:26:31.195854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.838 [2024-07-23 08:26:31.195864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.404 00:13:19.404 00:13:19.404 CUnit - A unit testing framework for C - Version 2.1-3 00:13:19.404 http://cunit.sourceforge.net/ 00:13:19.404 00:13:19.404 00:13:19.404 Suite: accel_dif 00:13:19.404 Test: verify: DIF generated, GUARD check ...passed 00:13:19.404 Test: verify: DIF generated, APPTAG check ...passed 00:13:19.404 Test: verify: DIF generated, REFTAG check ...passed 00:13:19.404 Test: verify: DIF not generated, GUARD check ...[2024-07-23 08:26:31.630408] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:19.404 passed 00:13:19.404 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 08:26:31.630563] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:19.404 passed 00:13:19.404 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 08:26:31.630656] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:19.404 passed 00:13:19.404 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:19.404 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 
08:26:31.630828] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:19.404 passed 00:13:19.404 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:19.404 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:19.404 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:19.404 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 08:26:31.631152] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:19.404 passed 00:13:19.404 Test: verify copy: DIF generated, GUARD check ...passed 00:13:19.404 Test: verify copy: DIF generated, APPTAG check ...passed 00:13:19.404 Test: verify copy: DIF generated, REFTAG check ...passed 00:13:19.404 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 08:26:31.631553] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:19.404 passed 00:13:19.404 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 08:26:31.631660] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:19.404 passed 00:13:19.404 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 08:26:31.631763] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:19.404 passed 00:13:19.404 Test: generate copy: DIF generated, GUARD check ...passed 00:13:19.404 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:19.404 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:19.404 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:19.404 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:19.404 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:19.404 Test: generate copy: iovecs-len validate ...[2024-07-23 08:26:31.632381] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:13:19.404 passed 00:13:19.404 Test: generate copy: buffer alignment validate ...passed 00:13:19.404 00:13:19.404 Run Summary: Type Total Ran Passed Failed Inactive 00:13:19.404 suites 1 1 n/a 0 0 00:13:19.404 tests 26 26 26 0 0 00:13:19.404 asserts 115 115 115 0 n/a 00:13:19.404 00:13:19.404 Elapsed time = 0.008 seconds 00:13:21.309 00:13:21.309 real 0m3.331s 00:13:21.309 user 0m5.754s 00:13:21.309 sys 0m0.513s 00:13:21.309 08:26:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.309 08:26:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:21.309 ************************************ 00:13:21.309 END TEST accel_dif_functional_tests 00:13:21.309 ************************************ 00:13:21.309 08:26:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:21.309 00:13:21.309 real 1m44.218s 00:13:21.309 user 1m48.098s 00:13:21.309 sys 0m12.866s 00:13:21.309 08:26:33 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.309 08:26:33 accel -- common/autotest_common.sh@10 -- # set +x 00:13:21.309 ************************************ 00:13:21.309 END TEST accel 00:13:21.309 ************************************ 00:13:21.309 08:26:33 -- common/autotest_common.sh@1142 -- # return 0 00:13:21.309 08:26:33 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:21.309 08:26:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:21.309 08:26:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.309 08:26:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.309 ************************************ 00:13:21.309 START TEST accel_rpc 00:13:21.309 ************************************ 00:13:21.309 08:26:33 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:21.309 * Looking for test storage... 00:13:21.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:13:21.309 08:26:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:21.309 08:26:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2223673 00:13:21.309 08:26:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:21.309 08:26:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2223673 00:13:21.309 08:26:33 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2223673 ']' 00:13:21.309 08:26:33 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.309 08:26:33 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.309 08:26:33 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.309 08:26:33 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.309 08:26:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.567 [2024-07-23 08:26:33.971267] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
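The accel_dif_functional_tests block that just ended is a CUnit suite around the accel layer's T10 DIF (protection information) helpers, driven by the dif binary built under test/accel/dif. The *ERROR* lines above are expected output from the negative cases: each one corrupts a field and checks that verification fails, which is why the Guard case reports Expected=5a5a against Actual=7867 and the App Tag and Ref Tag cases report mismatches in the 16-bit and 32-bit tag fields, while the suite still closes with all 26 tests passed. To poke at the binary outside the harness, the JSON accel config the script pipes in over /dev/fd/62 would be passed as a regular file instead (accel.json is a hypothetical placeholder, SPDK set to the workspace checkout as before):

  $SPDK/test/accel/dif/dif -c ./accel.json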
00:13:21.567 [2024-07-23 08:26:33.971610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223673 ] 00:13:21.826 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.826 [2024-07-23 08:26:34.266472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.394 [2024-07-23 08:26:34.749358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.962 08:26:35 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.962 08:26:35 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:22.962 08:26:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:22.962 08:26:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:22.962 08:26:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:22.962 08:26:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:22.962 08:26:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:22.962 08:26:35 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:22.962 08:26:35 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.962 08:26:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.962 ************************************ 00:13:22.962 START TEST accel_assign_opcode 00:13:22.962 ************************************ 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:22.962 [2024-07-23 08:26:35.457329] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:22.962 [2024-07-23 08:26:35.465281] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.962 08:26:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.126 software 00:13:25.126 00:13:25.126 real 0m1.869s 00:13:25.126 user 0m0.079s 00:13:25.126 sys 0m0.010s 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.126 08:26:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:25.126 ************************************ 00:13:25.126 END TEST accel_assign_opcode 00:13:25.126 ************************************ 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:25.126 08:26:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2223673 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2223673 ']' 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2223673 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2223673 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2223673' 00:13:25.126 killing process with pid 2223673 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@967 -- # kill 2223673 00:13:25.126 08:26:37 accel_rpc -- common/autotest_common.sh@972 -- # wait 2223673 00:13:30.401 00:13:30.401 real 0m8.734s 00:13:30.401 user 0m8.825s 00:13:30.401 sys 0m1.171s 00:13:30.401 08:26:42 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.401 08:26:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.401 ************************************ 00:13:30.401 END TEST accel_rpc 00:13:30.401 ************************************ 00:13:30.401 08:26:42 -- common/autotest_common.sh@1142 -- # return 0 00:13:30.401 08:26:42 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:13:30.401 08:26:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:30.401 08:26:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.401 08:26:42 -- common/autotest_common.sh@10 -- # set +x 00:13:30.401 ************************************ 00:13:30.401 START TEST app_cmdline 00:13:30.401 ************************************ 00:13:30.401 08:26:42 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:13:30.401 * Looking for test storage... 
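The accel_rpc suite that just finished exercises the accel layer's JSON-RPC surface rather than a data-path workload: spdk_tgt is started with --wait-for-rpc so initialization pauses, accel_assign_opc pins the copy opcode to a module (first to one called incorrect, then to software, as the notices above show), framework_start_init resumes startup, and accel_get_opc_assignments is queried through the jq -r .copy | grep software pipeline to confirm the assignment stuck. rpc_cmd in the trace is effectively the harness's wrapper around scripts/rpc.py; done by hand against the default /var/tmp/spdk.sock socket, and with SPDK set as in the earlier sketches, the same sequence would look roughly like:

  $SPDK/build/bin/spdk_tgt --wait-for-rpc &
  # once the RPC socket is listening:
  $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software
  $SPDK/scripts/rpc.py framework_start_init
  $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy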
00:13:30.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:30.401 08:26:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:30.401 08:26:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2224771 00:13:30.401 08:26:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:30.401 08:26:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2224771 00:13:30.401 08:26:42 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2224771 ']' 00:13:30.401 08:26:42 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.401 08:26:42 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.401 08:26:42 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.401 08:26:42 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.401 08:26:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:30.401 [2024-07-23 08:26:42.593116] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:30.401 [2024-07-23 08:26:42.593292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224771 ] 00:13:30.401 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.401 [2024-07-23 08:26:42.841054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.971 [2024-07-23 08:26:43.343034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:13:33.510 { 00:13:33.510 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:13:33.510 "fields": { 00:13:33.510 "major": 24, 00:13:33.510 "minor": 9, 00:13:33.510 "patch": 0, 00:13:33.510 "suffix": "-pre", 00:13:33.510 "commit": "f7b31b2b9" 00:13:33.510 } 00:13:33.510 } 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:33.510 08:26:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.510 08:26:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:33.511 08:26:45 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.511 08:26:45 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:33.511 08:26:45 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:34.079 request: 00:13:34.079 { 00:13:34.079 "method": "env_dpdk_get_mem_stats", 00:13:34.079 "req_id": 1 00:13:34.079 } 00:13:34.079 Got JSON-RPC error response 00:13:34.079 response: 00:13:34.079 { 00:13:34.079 "code": -32601, 00:13:34.079 "message": "Method not found" 00:13:34.079 } 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:34.079 08:26:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2224771 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2224771 ']' 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2224771 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2224771 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2224771' 00:13:34.079 killing process with pid 2224771 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@967 -- # kill 2224771 00:13:34.079 08:26:46 app_cmdline -- common/autotest_common.sh@972 -- # wait 2224771 00:13:39.386 00:13:39.386 real 0m8.789s 00:13:39.386 user 0m9.561s 00:13:39.386 sys 0m1.134s 00:13:39.386 08:26:51 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
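The cmdline suite above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; anything else, such as env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601 ("Method not found"). A minimal sketch of the same check, with SPDK_DIR standing in for the checkout path and the default /var/tmp/spdk.sock socket assumed:

  rpc=$SPDK_DIR/scripts/rpc.py
  # Whitelisted methods work as usual.
  $rpc spdk_get_version | jq -r .version
  $rpc rpc_get_methods | jq -r '.[]' | sort
  # Any non-whitelisted method should fail; the error body above shows code -32601.
  if $rpc env_dpdk_get_mem_stats 2>/dev/null; then
      echo 'unexpected: non-whitelisted RPC succeeded' >&2
  fi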
00:13:39.386 08:26:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:39.386 ************************************ 00:13:39.386 END TEST app_cmdline 00:13:39.386 ************************************ 00:13:39.386 08:26:51 -- common/autotest_common.sh@1142 -- # return 0 00:13:39.386 08:26:51 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:13:39.386 08:26:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:39.386 08:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.386 08:26:51 -- common/autotest_common.sh@10 -- # set +x 00:13:39.386 ************************************ 00:13:39.386 START TEST version 00:13:39.386 ************************************ 00:13:39.386 08:26:51 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:13:39.386 * Looking for test storage... 00:13:39.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:39.386 08:26:51 version -- app/version.sh@17 -- # get_header_version major 00:13:39.386 08:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:39.386 08:26:51 version -- app/version.sh@14 -- # cut -f2 00:13:39.386 08:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:13:39.386 08:26:51 version -- app/version.sh@17 -- # major=24 00:13:39.386 08:26:51 version -- app/version.sh@18 -- # get_header_version minor 00:13:39.387 08:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:39.387 08:26:51 version -- app/version.sh@14 -- # cut -f2 00:13:39.387 08:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:13:39.387 08:26:51 version -- app/version.sh@18 -- # minor=9 00:13:39.387 08:26:51 version -- app/version.sh@19 -- # get_header_version patch 00:13:39.387 08:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:39.387 08:26:51 version -- app/version.sh@14 -- # cut -f2 00:13:39.387 08:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:13:39.387 08:26:51 version -- app/version.sh@19 -- # patch=0 00:13:39.387 08:26:51 version -- app/version.sh@20 -- # get_header_version suffix 00:13:39.387 08:26:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:39.387 08:26:51 version -- app/version.sh@14 -- # cut -f2 00:13:39.387 08:26:51 version -- app/version.sh@14 -- # tr -d '"' 00:13:39.387 08:26:51 version -- app/version.sh@20 -- # suffix=-pre 00:13:39.387 08:26:51 version -- app/version.sh@22 -- # version=24.9 00:13:39.387 08:26:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:39.387 08:26:51 version -- app/version.sh@28 -- # version=24.9rc0 00:13:39.387 08:26:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:39.387 08:26:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:13:39.387 08:26:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:13:39.387 08:26:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:13:39.387 00:13:39.387 real 0m0.153s 00:13:39.387 user 0m0.083s 00:13:39.387 sys 0m0.105s 00:13:39.387 08:26:51 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.387 08:26:51 version -- common/autotest_common.sh@10 -- # set +x 00:13:39.387 ************************************ 00:13:39.387 END TEST version 00:13:39.387 ************************************ 00:13:39.387 08:26:51 -- common/autotest_common.sh@1142 -- # return 0 00:13:39.387 08:26:51 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@198 -- # uname -s 00:13:39.387 08:26:51 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:13:39.387 08:26:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:13:39.387 08:26:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:13:39.387 08:26:51 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:13:39.387 08:26:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.387 08:26:51 -- common/autotest_common.sh@10 -- # set +x 00:13:39.387 08:26:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:13:39.387 08:26:51 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:13:39.387 08:26:51 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:39.387 08:26:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:39.387 08:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.387 08:26:51 -- common/autotest_common.sh@10 -- # set +x 00:13:39.387 ************************************ 00:13:39.387 START TEST nvmf_tcp 00:13:39.387 ************************************ 00:13:39.387 08:26:51 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:39.387 * Looking for test storage... 00:13:39.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:39.387 08:26:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:13:39.387 08:26:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:39.387 08:26:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:13:39.387 08:26:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:39.387 08:26:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.387 08:26:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.387 ************************************ 00:13:39.387 START TEST nvmf_target_core 00:13:39.387 ************************************ 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:13:39.387 * Looking for test storage... 
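The version suite that just ended never starts a target at all; it only parses include/spdk/version.h and cross-checks the installed Python bindings. A condensed sketch of that derivation, with SPDK_DIR standing in for the checkout path (the rc0 handling of the -pre suffix is an assumption that mirrors the trace, where 24.9 becomes 24.9rc0):

  hdr=$SPDK_DIR/include/spdk/version.h
  get_hdr() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
  major=$(get_hdr MAJOR); minor=$(get_hdr MINOR)
  patch=$(get_hdr PATCH); suffix=$(get_hdr SUFFIX)
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version+=rc0
  # Should agree with the Python package used by the scripts:
  python3 -c 'import spdk; print(spdk.__version__)'
  echo "$version"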
00:13:39.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:13:39.387 08:26:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:39.388 08:26:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:39.388 08:26:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.388 08:26:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:39.388 ************************************ 00:13:39.388 START TEST nvmf_abort 00:13:39.388 ************************************ 00:13:39.388 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:39.648 * Looking for test storage... 00:13:39.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
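From here nvmf_abort calls nvmftestinit, and the long ip/iptables trace that follows amounts to moving one of the two detected E810 ports into a private network namespace so the target address (10.0.0.2) and the initiator address (10.0.0.1) sit on opposite ends of real hardware. Condensed from the commands traced below, keeping this run's cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace (run as root):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # sanity checks, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1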
00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.648 08:26:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.648 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.649 08:26:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.649 08:26:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.940 08:26:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:42.940 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.940 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:42.941 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.941 08:26:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:42.941 Found net devices under 0000:84:00.0: cvl_0_0 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:42.941 Found net devices under 0000:84:00.1: cvl_0_1 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.941 08:26:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:13:42.941 00:13:42.941 --- 10.0.0.2 ping statistics --- 00:13:42.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.941 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:13:42.941 00:13:42.941 --- 10.0.0.1 ping statistics --- 00:13:42.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.941 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2227671 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2227671 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2227671 ']' 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.941 08:26:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.200 [2024-07-23 08:26:55.503940] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:43.200 [2024-07-23 08:26:55.504114] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.200 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.200 [2024-07-23 08:26:55.678030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.767 [2024-07-23 08:26:56.002067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.768 [2024-07-23 08:26:56.002157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.768 [2024-07-23 08:26:56.002200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.768 [2024-07-23 08:26:56.002225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.768 [2024-07-23 08:26:56.002252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.768 [2024-07-23 08:26:56.002424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.768 [2024-07-23 08:26:56.002477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.768 [2024-07-23 08:26:56.002488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.703 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.703 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:44.703 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.703 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.703 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.703 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.704 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:44.704 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 [2024-07-23 08:26:56.996754] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 Malloc0 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.704 Delay0 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 [2024-07-23 08:26:57.151992] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:44.962 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.962 [2024-07-23 08:26:57.411320] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:47.492 Initializing NVMe Controllers 00:13:47.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:47.493 controller IO queue size 128 less than required 00:13:47.493 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:47.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:47.493 Initialization complete. Launching workers. 
00:13:47.493 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 18481 00:13:47.493 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 18542, failed to submit 66 00:13:47.493 success 18481, unsuccess 61, failed 0 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.493 rmmod nvme_tcp 00:13:47.493 rmmod nvme_fabrics 00:13:47.493 rmmod nvme_keyring 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2227671 ']' 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2227671 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2227671 ']' 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2227671 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2227671 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2227671' 00:13:47.493 killing process with pid 2227671 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2227671 00:13:47.493 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2227671 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.395 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.304 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.305 00:13:51.305 real 0m11.730s 00:13:51.305 user 0m18.673s 00:13:51.305 sys 0m4.011s 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:51.305 ************************************ 00:13:51.305 END TEST nvmf_abort 00:13:51.305 ************************************ 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:51.305 ************************************ 00:13:51.305 START TEST nvmf_ns_hotplug_stress 00:13:51.305 ************************************ 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:51.305 * Looking for test storage... 
00:13:51.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.305 08:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:54.598 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.598 08:27:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:54.598 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:54.598 Found net devices under 0000:84:00.0: cvl_0_0 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:54.598 Found net devices under 0000:84:00.1: cvl_0_1 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.598 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.598 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.598 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.598 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.598 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:13:54.859 00:13:54.859 --- 10.0.0.2 ping statistics --- 00:13:54.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.859 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:13:54.859 00:13:54.859 --- 10.0.0.1 ping statistics --- 00:13:54.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.859 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2230544 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2230544 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2230544 ']' 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
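For readability, the nvmf_tcp_init plumbing traced just above reduces to roughly the sequence below; the interface names (cvl_0_0, cvl_0_1), the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses are the values observed on this host rather than fixed constants, and the nvmf_tgt path is shortened to a relative form.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # nvmfappstart then launches the target application inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE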
00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.859 08:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.118 [2024-07-23 08:27:07.414937] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:55.118 [2024-07-23 08:27:07.415243] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.118 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.376 [2024-07-23 08:27:07.689669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.634 [2024-07-23 08:27:08.010663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.634 [2024-07-23 08:27:08.010752] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.634 [2024-07-23 08:27:08.010793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.634 [2024-07-23 08:27:08.010818] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.634 [2024-07-23 08:27:08.010843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.634 [2024-07-23 08:27:08.011011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.634 [2024-07-23 08:27:08.011103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.634 [2024-07-23 08:27:08.011126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:56.202 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.464 [2024-07-23 08:27:08.823158] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.464 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.032 08:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.598 
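Condensed from the RPC calls issued here and in the next few trace entries, the target-side configuration for this test amounts to the following; the rpc.py path is shortened, while the NQN, serial number, and bdev names are exactly those used in the trace.

  rpc_py=./scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0                    # small malloc bdev with 512-byte blocks
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512                         # null bdev, grown later via bdev_null_resize
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1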
[2024-07-23 08:27:09.834457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.598 08:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.855 08:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:58.790 Malloc0 00:13:58.790 08:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:59.049 Delay0 00:13:59.049 08:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.614 08:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:00.178 NULL1 00:14:00.178 08:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:00.744 08:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2231738 00:14:00.744 08:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:00.744 08:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:00.744 08:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.001 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.259 08:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.824 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:01.824 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:02.389 true 00:14:02.389 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:02.389 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.323 Read completed with error (sct=0, sc=11) 00:14:03.323 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:03.839 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:03.839 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:04.404 true 00:14:04.404 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:04.404 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.779 08:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.037 08:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:06.037 08:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:06.603 true 00:14:06.603 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:06.603 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.538 08:27:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.472 08:27:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:08.472 08:27:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:08.731 true 00:14:08.731 08:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:08.731 08:27:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.666 08:27:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:10.217 08:27:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:10.217 08:27:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:10.784 true 00:14:10.784 08:27:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:10.784 08:27:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.158 08:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.416 08:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:12.417 08:27:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:12.983 true 00:14:13.241 08:27:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:13.241 08:27:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.175 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.741 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:14.741 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:14.741 true 00:14:14.741 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:14.741 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.117 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:16.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:16.685 08:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:16.685 08:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:17.251 true 00:14:17.251 08:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:17.251 08:27:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.186 08:27:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.752 08:27:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:18.752 08:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:19.319 true 00:14:19.319 08:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:19.319 08:27:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.253 08:27:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.187 08:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:21.187 08:27:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:21.753 true 00:14:21.753 08:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:21.753 08:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.685 08:27:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.944 08:27:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:22.944 08:27:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:23.877 true 00:14:23.877 08:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:23.877 08:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.814 08:27:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.815 08:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:24.815 08:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:25.073 true 00:14:25.073 08:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:25.073 08:27:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.008 08:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.525 08:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:27.525 08:27:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:28.091 true 00:14:28.091 08:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:28.091 08:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.349 08:27:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:29.123 08:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:29.123 08:27:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:29.688 true 00:14:29.688 08:27:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:29.688 08:27:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.064 08:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.323 Initializing NVMe Controllers 00:14:31.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.323 Controller IO queue size 128, less than required. 00:14:31.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.323 Controller IO queue size 128, less than required. 00:14:31.323 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:31.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:31.323 Initialization complete. Launching workers. 00:14:31.323 ======================================================== 00:14:31.323 Latency(us) 00:14:31.323 Device Information : IOPS MiB/s Average min max 00:14:31.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1038.70 0.51 70167.04 3940.56 1031797.50 00:14:31.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6679.16 3.26 19078.59 6138.22 794330.59 00:14:31.323 ======================================================== 00:14:31.323 Total : 7717.86 3.77 25954.27 3940.56 1031797.50 00:14:31.323 00:14:31.581 08:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:31.581 08:27:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:32.147 true 00:14:32.147 08:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2231738 00:14:32.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2231738) - No such process 00:14:32.147 08:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2231738 00:14:32.147 08:27:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.713 08:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.646 08:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:33.646 08:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:33.646 08:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:33.646 08:27:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:33.646 08:27:45 
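Pulling this stress phase together: the xtrace above (ns_hotplug_stress.sh lines 40-53) shows spdk_nvme_perf running against the target while one namespace is repeatedly removed and re-added and NULL1 is resized upward one step at a time. The following is a rough reconstruction from the trace; any script lines the trace does not show are omitted, so treat it as an approximation rather than the script itself.

  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 $PERF_PID; do                                     # keep hot-plugging while perf is alive
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 $null_size                   # 1001, 1002, ... as seen above
  done
  wait $PERF_PID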
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:33.904 null0 00:14:33.904 08:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:33.904 08:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:33.904 08:27:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:34.837 null1 00:14:34.837 08:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:34.837 08:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:34.837 08:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:35.095 null2 00:14:35.095 08:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:35.095 08:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:35.095 08:27:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:36.028 null3 00:14:36.028 08:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.028 08:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.028 08:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:36.028 null4 00:14:36.028 08:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.028 08:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.028 08:27:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:36.593 null5 00:14:36.851 08:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:36.851 08:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:36.851 08:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:37.416 null6 00:14:37.416 08:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.416 08:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.416 08:27:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:37.982 null7 00:14:37.982 08:27:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:37.982 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
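The eight parallel workers launched in this stretch of the trace all run the same helper, after null0 through null7 have been created with bdev_null_create. A rough reconstruction from the xtrace (ns_hotplug_stress.sh lines 14-18 and 58-66) is sketched below; the loop bound of 10 and the nsid/bdev pairing are read off the trace, and the real script may differ in detail.

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
      done
  }
  nthreads=8 pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &        # one worker per null bdev, e.g. add_remove 1 null0
      pids+=($!)
  done
  wait "${pids[@]}"                           # ns_hotplug_stress.sh@66 waits on all eight workers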
00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2235882 2235883 2235884 2235886 2235888 2235890 2235893 2235894 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:37.983 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:38.242 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:38.242 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:38.242 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:38.500 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:38.500 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.500 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:38.500 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:38.500 08:27:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:38.758 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.758 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.758 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:38.758 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:38.758 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:38.758 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.016 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.274 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:39.549 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:39.549 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.549 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:39.549 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:39.549 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.549 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.550 08:27:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:39.550 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.550 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.550 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:39.550 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:39.827 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:39.828 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:40.086 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.344 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:40.603 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:40.603 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.603 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.603 08:27:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:40.603 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:40.861 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:40.861 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:40.861 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:40.861 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:40.861 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.119 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:41.120 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:41.120 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.378 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:41.636 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:41.636 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.636 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.636 08:27:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.895 08:27:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:41.895 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:42.153 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.153 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:42.153 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:42.411 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:42.411 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.411 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.411 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:42.411 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.411 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.411 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:42.669 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.669 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.669 08:27:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
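[Note] Each cycle in the trace is a single hot-plug pass: add a null bdev under the existing subsystem with an explicit namespace ID, then remove it by ID while the initiator side of the test keeps issuing I/O. Run by hand against the same target, one pass looks roughly like the lines below; the nvmf_get_subsystems listing is a verification step added here for illustration and is not something this stress loop itself performs:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# hot-add null4 as namespace 5, matching one of the calls in the trace
"$rpc" nvmf_subsystem_add_ns -n 5 "$nqn" null4
# optional check that nsid 5 is now attached (assumed verification step)
"$rpc" nvmf_get_subsystems
# hot-remove the namespace again by ID
"$rpc" nvmf_subsystem_remove_ns "$nqn" 5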
00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:42.669 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:42.927 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.186 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:43.446 08:27:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:43.705 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:43.965 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:44.223 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:44.223 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.223 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.223 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:44.223 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:44.223 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.482 08:27:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.482 08:27:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:44.741 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:44.741 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:44.741 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:44.741 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.000 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.259 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.518 08:27:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.518 08:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:45.518 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.518 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.518 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:45.777 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:45.777 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:45.777 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:45.777 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:45.777 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:45.777 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:46.036 08:27:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:46.036 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.295 rmmod nvme_tcp 00:14:46.295 rmmod nvme_fabrics 00:14:46.295 rmmod nvme_keyring 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2230544 ']' 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2230544 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2230544 ']' 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2230544 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2230544 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2230544' 00:14:46.295 killing process with pid 2230544 00:14:46.295 08:27:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2230544 00:14:46.295 08:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2230544 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.200 08:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.106 00:14:50.106 real 0m58.908s 00:14:50.106 user 4m28.986s 00:14:50.106 sys 0m20.890s 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.106 ************************************ 00:14:50.106 END TEST nvmf_ns_hotplug_stress 00:14:50.106 ************************************ 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.106 08:28:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:50.367 ************************************ 00:14:50.367 START TEST nvmf_delete_subsystem 00:14:50.367 ************************************ 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:50.367 * Looking for test storage... 
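[Note] At this point the log switches from the ns_hotplug_stress teardown (nvmftestfini: module unload, killing the target process with PID 2230544, flushing the test address from cvl_0_1) to the next test, nvmf_delete_subsystem, launched via run_test with --transport=tcp. A rough, hand-run equivalent of that sequence, assuming a $nvmf_tgt_pid variable holding the target PID; the harness's nvmftestfini/killprocess helpers do more (xtrace handling, retries, error checks) than this sketch:

sync
modprobe -v -r nvme-tcp || true       # the harness runs this under set +e, so busy modules are tolerated
modprobe -v -r nvme-fabrics || true
kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"   # 2230544 in this run
ip -4 addr flush cvl_0_1                       # drop the test IP from the second test port

# then the next target test is started the same way run_test does here
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp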
00:14:50.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.367 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.368 08:28:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
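What follows in the trace is gather_supported_nvmf_pci_devs: it builds allow-lists of NIC PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox ConnectX IDs), walks the matching PCI functions, and records the network interface that sysfs exposes for each one (here cvl_0_0 and cvl_0_1 under 0000:84:00.0/.1). A simplified sketch of that discovery, assumed rather than copied from test/nvmf/common.sh:

    # assumed reconstruction, not the actual common.sh code
    intel=0x8086 mellanox=0x15b3
    e810=("$intel:0x1592" "$intel:0x159b")           # supported E810 device IDs
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        id="$(cat "$pci/vendor"):$(cat "$pci/device")"
        for want in "${e810[@]}"; do
            if [[ $id == "$want" && -d $pci/net ]]; then
                net_devs+=("$(ls "$pci/net")")        # e.g. cvl_0_0, cvl_0_1
            fi
        done
    done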
00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:53.665 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:53.665 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:53.665 Found net devices under 0000:84:00.0: cvl_0_0 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:53.665 Found net devices under 0000:84:00.1: cvl_0_1 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:53.665 08:28:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.665 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:53.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:14:53.666 00:14:53.666 --- 10.0.0.2 ping statistics --- 00:14:53.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.666 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:14:53.666 00:14:53.666 --- 10.0.0.1 ping statistics --- 00:14:53.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.666 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:53.666 08:28:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2239216 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2239216 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2239216 ']' 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.666 08:28:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:53.925 [2024-07-23 08:28:06.203783] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:53.925 [2024-07-23 08:28:06.204085] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.925 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.185 [2024-07-23 08:28:06.456748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:54.443 [2024-07-23 08:28:06.861597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.443 [2024-07-23 08:28:06.861702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.443 [2024-07-23 08:28:06.861759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.443 [2024-07-23 08:28:06.861795] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.443 [2024-07-23 08:28:06.861830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.443 [2024-07-23 08:28:06.862003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.443 [2024-07-23 08:28:06.862014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.042 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.042 [2024-07-23 08:28:07.458344] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.043 [2024-07-23 08:28:07.476954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.043 NULL1 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.043 Delay0 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2239461 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:55.043 08:28:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:55.303 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.303 [2024-07-23 08:28:07.621421] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
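At this point the target side of the delete_subsystem test is fully assembled: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, a 1000 MB null bdev with 512-byte blocks wrapped in a delay bdev (1000000 for each latency argument, roughly one second per I/O assuming the usual microsecond units), and that delay bdev attached as namespace 1. spdk_nvme_perf is then launched against the listener so the subsystem can be deleted while commands are still outstanding; the long run of "Read/Write completed with error" lines that follows is those in-flight commands being failed back, which is the behaviour this test exists to exercise. Condensed restatement of the traced sequence, with the rpc_cmd wrapper shown as a plain rpc.py call (that substitution is an assumption; the parameters are copied from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1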
00:14:57.204 08:28:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.204 08:28:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.204 08:28:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 [2024-07-23 08:28:09.737116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(5) to be set 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed 
with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Write completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.462 starting I/O failed: -6 00:14:57.462 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 [2024-07-23 08:28:09.739841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(5) to be set 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read 
completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 [2024-07-23 08:28:09.740651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error 
(sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Write completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 Read completed with error (sct=0, sc=8) 00:14:57.463 [2024-07-23 08:28:09.741836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set 00:14:58.397 [2024-07-23 08:28:10.682692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Read completed with error (sct=0, sc=8) 00:14:58.397 Write completed with error (sct=0, sc=8) 00:14:58.397 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 [2024-07-23 08:28:10.743134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 [2024-07-23 08:28:10.744263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write 
completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 [2024-07-23 08:28:10.746124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Write completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 08:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.398 08:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:58.398 08:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2239461 00:14:58.398 08:28:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 Read completed with error (sct=0, sc=8) 00:14:58.398 [2024-07-23 08:28:10.750950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set 00:14:58.398 Initializing NVMe Controllers 00:14:58.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:58.398 Controller IO queue size 128, less than required. 00:14:58.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:58.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:58.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:58.398 Initialization complete. Launching workers. 
00:14:58.398 ======================================================== 00:14:58.398 Latency(us) 00:14:58.398 Device Information : IOPS MiB/s Average min max 00:14:58.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.00 0.08 922316.28 1693.03 1050074.83 00:14:58.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.07 0.08 936544.89 3590.13 1019521.58 00:14:58.398 ======================================================== 00:14:58.398 Total : 314.07 0.15 929296.35 1693.03 1050074.83 00:14:58.398 00:14:58.398 [2024-07-23 08:28:10.752916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor 00:14:58.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2239461 00:14:58.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2239461) - No such process 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2239461 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2239461 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2239461 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:58.964 [2024-07-23 08:28:11.271392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.964 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:58.965 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.965 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2239863 00:14:58.965 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:58.965 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:14:58.965 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:58.965 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:58.965 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.223 [2024-07-23 08:28:11.487625] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
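The test then repeats the exercise: the subsystem is re-created with the same listener and Delay0 namespace, a second, shorter perf run (-t 3) is started as pid 2239863, and the script polls with kill -0 / sleep 0.5 until that process exits before reaping it with wait; the "No such process" message from kill is what ends the loop. A minimal reconstruction of the polling idiom traced at delete_subsystem.sh lines 56-60, simplified rather than copied from the script:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do       # perf still running?
        if (( delay++ > 20 )); then                 # give up after ~10 s of 0.5 s naps
            echo "perf (pid $perf_pid) did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done
    wait "$perf_pid"                                # reap and collect perf's exit status

The latency summary a little further down (averages around 1.0e6 us, minimums just above 1.0e6 us) is consistent with every I/O passing through the Delay0 bdev's artificial latency.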
00:14:59.480 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:59.480 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:14:59.480 08:28:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.045 08:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:00.045 08:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:15:00.045 08:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.303 08:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:00.303 08:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:15:00.303 08:28:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.873 08:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:00.873 08:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:15:00.873 08:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.440 08:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.440 08:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:15:01.440 08:28:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.008 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.008 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:15:02.008 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.267 Initializing NVMe Controllers 00:15:02.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.267 Controller IO queue size 128, less than required. 00:15:02.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:02.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:02.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:02.267 Initialization complete. Launching workers. 
00:15:02.267 ======================================================== 00:15:02.267 Latency(us) 00:15:02.267 Device Information : IOPS MiB/s Average min max 00:15:02.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006582.64 1000390.31 1017584.25 00:15:02.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007921.55 1000433.04 1049069.54 00:15:02.267 ======================================================== 00:15:02.267 Total : 256.00 0.12 1007252.10 1000390.31 1049069.54 00:15:02.267 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2239863 00:15:02.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2239863) - No such process 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2239863 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.525 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.526 rmmod nvme_tcp 00:15:02.526 rmmod nvme_fabrics 00:15:02.526 rmmod nvme_keyring 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2239216 ']' 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2239216 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2239216 ']' 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2239216 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2239216 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2239216' 00:15:02.526 killing process with pid 2239216 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2239216 00:15:02.526 08:28:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2239216 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.062 08:28:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.968 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:06.968 00:15:06.968 real 0m16.487s 00:15:06.968 user 0m32.554s 00:15:06.968 sys 0m4.494s 00:15:06.968 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.968 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:06.968 ************************************ 00:15:06.968 END TEST nvmf_delete_subsystem 00:15:06.969 ************************************ 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:06.969 ************************************ 00:15:06.969 START TEST nvmf_host_management 00:15:06.969 ************************************ 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:06.969 * Looking for test storage... 
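Before the host_management run gets going, it is worth noting the shutdown pattern the delete_subsystem test traced above: the script polls the target PID with kill -0 (which checks existence without sending a signal), sleeps half a second between checks, and gives up after roughly 20 iterations, which is exactly the loop of delete_subsystem.sh lines 57-60 seen in the xtrace. A minimal standalone sketch of that bounded-wait pattern, with illustrative names rather than the script's actual variables:

    #!/usr/bin/env bash
    # Wait for a process to exit, polling at most `limit` times at 0.5 s intervals.
    wait_for_exit() {
        local pid=$1 limit=${2:-20} delay=0
        # kill -0 only tests that the PID exists; it delivers no signal.
        while kill -0 "$pid" 2>/dev/null; do
            (( delay++ > limit )) && return 1   # timed out, process still alive
            sleep 0.5
        done
        return 0                                # process is gone
    }

    # Example use (hypothetical): kill "$nvmfpid"; wait_for_exit "$nvmfpid" 20 || echo "still running"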
00:15:06.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.969 08:28:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:10.262 
08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:10.262 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:10.263 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:10.263 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:10.263 Found net devices under 0000:84:00.0: cvl_0_0 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:10.263 Found net devices under 0000:84:00.1: cvl_0_1 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:10.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:10.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:15:10.263 00:15:10.263 --- 10.0.0.2 ping statistics --- 00:15:10.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.263 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:15:10.263 00:15:10.263 --- 10.0.0.1 ping statistics --- 00:15:10.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.263 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:10.263 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2242488 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2242488 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2242488 ']' 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.522 08:28:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:10.522 [2024-07-23 08:28:22.944377] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:10.522 [2024-07-23 08:28:22.944565] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.781 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.781 [2024-07-23 08:28:23.138043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.038 [2024-07-23 08:28:23.459907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.038 [2024-07-23 08:28:23.459988] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.038 [2024-07-23 08:28:23.460022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.038 [2024-07-23 08:28:23.460047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.038 [2024-07-23 08:28:23.460072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.038 [2024-07-23 08:28:23.460279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.038 [2024-07-23 08:28:23.460340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.038 [2024-07-23 08:28:23.460402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.038 [2024-07-23 08:28:23.460417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:11.603 08:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.603 08:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:11.603 08:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.603 08:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.603 08:28:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:11.603 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.603 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.603 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.603 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:11.603 [2024-07-23 08:28:24.022134] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.603 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.604 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:11.895 Malloc0 00:15:11.895 [2024-07-23 08:28:24.166549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2242775 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2242775 /var/tmp/bdevperf.sock 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2242775 ']' 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
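For reference, the subsystem setup that host_management.sh drives through its generated rpcs.txt above is the standard SPDK target-side flow: create a TCP transport, back it with a Malloc bdev (64 MiB / 512 B blocks per the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults in the trace), expose it as a subsystem, and add a listener on 10.0.0.2:4420. Done by hand against the target started in the cvl_0_0_ns_spdk namespace, the equivalent rpc.py calls would look roughly like the sketch below; the test itself batches these through rpc_cmd, and exact flags can vary between SPDK versions:

    RPC=./scripts/rpc.py   # assumes the default RPC socket /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192                 # same transport options the test uses
    $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420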
00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:11.895 { 00:15:11.895 "params": { 00:15:11.895 "name": "Nvme$subsystem", 00:15:11.895 "trtype": "$TEST_TRANSPORT", 00:15:11.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.895 "adrfam": "ipv4", 00:15:11.895 "trsvcid": "$NVMF_PORT", 00:15:11.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:11.895 "hdgst": ${hdgst:-false}, 00:15:11.895 "ddgst": ${ddgst:-false} 00:15:11.895 }, 00:15:11.895 "method": "bdev_nvme_attach_controller" 00:15:11.895 } 00:15:11.895 EOF 00:15:11.895 )") 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:11.895 08:28:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:11.895 "params": { 00:15:11.895 "name": "Nvme0", 00:15:11.895 "trtype": "tcp", 00:15:11.895 "traddr": "10.0.0.2", 00:15:11.896 "adrfam": "ipv4", 00:15:11.896 "trsvcid": "4420", 00:15:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:11.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:11.896 "hdgst": false, 00:15:11.896 "ddgst": false 00:15:11.896 }, 00:15:11.896 "method": "bdev_nvme_attach_controller" 00:15:11.896 }' 00:15:11.896 [2024-07-23 08:28:24.337019] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:11.896 [2024-07-23 08:28:24.337369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242775 ] 00:15:12.154 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.154 [2024-07-23 08:28:24.589451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.412 [2024-07-23 08:28:24.902492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.980 Running I/O for 10 seconds... 
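The bdevperf invocation above never touches a config file on disk: gen_nvmf_target_json assembles one JSON fragment per subsystem in a shell array from a heredoc template, wraps them in a bdev-subsystem config, and hands the result to bdevperf through process substitution, which is why the command line shows --json /dev/fd/63. A simplified sketch of that technique, hard-coding the single controller used here (the real helper in test/nvmf/common.sh is more general):

    # Emit a minimal bdevperf JSON config attaching one NVMe-oF/TCP controller.
    gen_attach_json() {
        printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0" } } ] } ] }\n'
    }

    # Feed it to bdevperf without a temp file; <(...) shows up as /dev/fd/63.
    # ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    #     --json <(gen_attach_json) -q 64 -o 65536 -w verify -t 10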
00:15:13.547 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.547 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:13.547 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:13.547 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.548 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.548 [2024-07-23 
08:28:25.989570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.989698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.989774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.989812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.989849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.989880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.989914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.989944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.989988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.990954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.990984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.548 [2024-07-23 08:28:25.991458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.548 [2024-07-23 08:28:25.991486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.991943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.991971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.992967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.992999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.549 [2024-07-23 08:28:25.993712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.549 [2024-07-23 08:28:25.993742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7c80 is same with the state(5) to be set 00:15:13.549 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.549 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:13.550 [2024-07-23 08:28:25.994142] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f7c80 was disconnected and freed. reset controller. 
00:15:13.550 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.550 [2024-07-23 08:28:25.994293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.550 [2024-07-23 08:28:25.994349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.550 [2024-07-23 08:28:25.994384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns 08:28:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 id:0 cdw10:00000000 cdw11:00000000 00:15:13.550 [2024-07-23 08:28:25.994416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.550 [2024-07-23 08:28:25.994444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.550 [2024-07-23 08:28:25.994471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.550 [2024-07-23 08:28:25.994498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.550 [2024-07-23 08:28:25.994542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.550 [2024-07-23 08:28:25.994569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:15:13.550 [2024-07-23 08:28:25.996275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:13.550 task offset: 73472 on job bdev=Nvme0n1 fails 00:15:13.550 00:15:13.550 Latency(us) 00:15:13.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.550 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:13.550 Job: Nvme0n1 ended in about 0.56 seconds with error 00:15:13.550 Verification LBA range: start 0x0 length 0x400 00:15:13.550 Nvme0n1 : 0.56 915.90 57.24 114.49 0.00 60258.82 5825.42 55535.69 00:15:13.550 =================================================================================================================== 00:15:13.550 Total : 915.90 57.24 114.49 0.00 60258.82 5825.42 55535.69 00:15:13.550 08:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.550 08:28:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:13.550 [2024-07-23 08:28:26.003296] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:13.550 [2024-07-23 08:28:26.003368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:15:13.550 [2024-07-23 08:28:26.059985] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
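Annotation (not part of the captured log): the long run of ABORTED - SQ DELETION completions above is consistent with the target tearing down the I/O queue pair while reads were still queued, so every outstanding READ is failed back to bdevperf; that is why the first run reports roughly 114 Fail/s before the controller reset completes. A minimal sketch of the re-authorization step host_management.sh@85 issues at this point, with the subsystem and host NQNs taken from the trace above:

    # re-allow the initiator on the subsystem; the follow-up bdevperf run is then expected to pass
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0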
00:15:14.485 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2242775 00:15:14.485 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:14.743 { 00:15:14.743 "params": { 00:15:14.743 "name": "Nvme$subsystem", 00:15:14.743 "trtype": "$TEST_TRANSPORT", 00:15:14.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.743 "adrfam": "ipv4", 00:15:14.743 "trsvcid": "$NVMF_PORT", 00:15:14.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.743 "hdgst": ${hdgst:-false}, 00:15:14.743 "ddgst": ${ddgst:-false} 00:15:14.743 }, 00:15:14.743 "method": "bdev_nvme_attach_controller" 00:15:14.743 } 00:15:14.743 EOF 00:15:14.743 )") 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:14.743 08:28:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:14.743 "params": { 00:15:14.743 "name": "Nvme0", 00:15:14.743 "trtype": "tcp", 00:15:14.743 "traddr": "10.0.0.2", 00:15:14.743 "adrfam": "ipv4", 00:15:14.743 "trsvcid": "4420", 00:15:14.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:14.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:14.743 "hdgst": false, 00:15:14.743 "ddgst": false 00:15:14.743 }, 00:15:14.743 "method": "bdev_nvme_attach_controller" 00:15:14.743 }' 00:15:14.743 [2024-07-23 08:28:27.170817] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:14.743 [2024-07-23 08:28:27.171140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243067 ] 00:15:15.002 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.002 [2024-07-23 08:28:27.432426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.261 [2024-07-23 08:28:27.745323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.194 Running I/O for 1 seconds... 
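Annotation (not part of the captured log): the JSON printed just above is the bdev config that gen_nvmf_target_json feeds to bdevperf via /dev/fd/62. A rough sketch of reproducing the same run by hand against an already-running target; the bdevperf.json filename and the outer subsystems/bdev wrapper around the bdev_nvme_attach_controller entry are assumptions, since only the inner entry is visible in this trace:

    # hypothetical standalone invocation; wrap the attach-controller entry shown above as
    # {"subsystems":[{"subsystem":"bdev","config":[ <entry> ]}]} and save it as bdevperf.json
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json bdevperf.json -q 64 -o 65536 -w verify -t 1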
00:15:17.128 00:15:17.128 Latency(us) 00:15:17.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.128 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:17.128 Verification LBA range: start 0x0 length 0x400 00:15:17.128 Nvme0n1 : 1.03 993.88 62.12 0.00 0.00 63085.49 12136.30 54758.97 00:15:17.128 =================================================================================================================== 00:15:17.128 Total : 993.88 62.12 0.00 0.00 63085.49 12136.30 54758.97 00:15:18.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2242775 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.502 rmmod nvme_tcp 00:15:18.502 rmmod nvme_fabrics 00:15:18.502 rmmod nvme_keyring 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2242488 ']' 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2242488 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2242488 ']' 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2242488 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242488 00:15:18.502 08:28:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242488' 00:15:18.502 killing process with pid 2242488 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2242488 00:15:18.502 08:28:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2242488 00:15:20.406 [2024-07-23 08:28:32.711730] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.406 08:28:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:22.946 00:15:22.946 real 0m15.650s 00:15:22.946 user 0m42.606s 00:15:22.946 sys 0m4.821s 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:22.946 ************************************ 00:15:22.946 END TEST nvmf_host_management 00:15:22.946 ************************************ 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:22.946 ************************************ 00:15:22.946 START TEST nvmf_lvol 00:15:22.946 ************************************ 00:15:22.946 08:28:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:22.946 * Looking for test storage... 
00:15:22.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.946 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.946 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:22.946 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.947 08:28:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.239 08:28:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:26.239 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:26.239 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:26.239 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:26.240 Found net devices under 0000:84:00.0: cvl_0_0 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:26.240 Found net devices under 0000:84:00.1: cvl_0_1 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:26.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:15:26.240 00:15:26.240 --- 10.0.0.2 ping statistics --- 00:15:26.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.240 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:26.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:26.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:15:26.240 00:15:26.240 --- 10.0.0.1 ping statistics --- 00:15:26.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.240 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2245682 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2245682 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2245682 ']' 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.240 08:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 [2024-07-23 08:28:38.445957] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:26.240 [2024-07-23 08:28:38.446125] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.240 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.240 [2024-07-23 08:28:38.619987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.808 [2024-07-23 08:28:39.086286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.808 [2024-07-23 08:28:39.086433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.808 [2024-07-23 08:28:39.086478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.808 [2024-07-23 08:28:39.086505] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.808 [2024-07-23 08:28:39.086531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.808 [2024-07-23 08:28:39.086741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.808 [2024-07-23 08:28:39.086795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.808 [2024-07-23 08:28:39.086807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.066 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.066 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:27.066 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.066 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.066 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:27.324 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.324 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:27.582 [2024-07-23 08:28:39.866098] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.582 08:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:28.148 08:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:28.148 08:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:29.109 08:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:29.109 08:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:29.674 08:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:30.242 08:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c08044d4-3179-4abf-8b29-0ec55230cc7e 
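Annotation (not part of the captured log): condensed from the rpc.py calls traced above, the lvol target setup so far amounts to creating the TCP transport, backing a RAID-0 with two 64 MB / 512-byte-block malloc bdevs, and building an lvstore on top of it; rpc.py below is shorthand for the full scripts/rpc.py path used in the trace, and the UUID c08044d4-3179-4abf-8b29-0ec55230cc7e is what bdev_lvol_create_lvstore just returned:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                        # auto-named Malloc0
    rpc.py bdev_malloc_create 64 512                        # auto-named Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs               # lvstore UUID used by bdev_lvol_create next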
00:15:30.242 08:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c08044d4-3179-4abf-8b29-0ec55230cc7e lvol 20 00:15:30.808 08:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e5f6e0b9-8254-4ce4-99da-e5de151157de 00:15:30.808 08:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:31.066 08:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5f6e0b9-8254-4ce4-99da-e5de151157de 00:15:31.633 08:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:32.201 [2024-07-23 08:28:44.602382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.201 08:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.767 08:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2246505 00:15:32.767 08:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:32.767 08:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:33.025 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.959 08:28:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e5f6e0b9-8254-4ce4-99da-e5de151157de MY_SNAPSHOT 00:15:34.526 08:28:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a4883c5f-97da-47d8-95a2-5bb44e88e374 00:15:34.526 08:28:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e5f6e0b9-8254-4ce4-99da-e5de151157de 30 00:15:35.093 08:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a4883c5f-97da-47d8-95a2-5bb44e88e374 MY_CLONE 00:15:35.659 08:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f9d79c64-061b-4582-8127-6a947fb40197 00:15:35.659 08:28:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f9d79c64-061b-4582-8127-6a947fb40197 00:15:37.034 08:28:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2246505 00:15:43.597 Initializing NVMe Controllers 00:15:43.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:43.597 Controller IO queue size 128, less than required. 00:15:43.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:43.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:43.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:43.597 Initialization complete. Launching workers. 00:15:43.597 ======================================================== 00:15:43.597 Latency(us) 00:15:43.597 Device Information : IOPS MiB/s Average min max 00:15:43.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6201.00 24.22 20658.74 451.95 249353.60 00:15:43.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6048.70 23.63 21164.78 4396.85 259896.36 00:15:43.597 ======================================================== 00:15:43.597 Total : 12249.70 47.85 20908.61 451.95 259896.36 00:15:43.597 00:15:43.597 08:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:43.856 08:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e5f6e0b9-8254-4ce4-99da-e5de151157de 00:15:44.423 08:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c08044d4-3179-4abf-8b29-0ec55230cc7e 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.682 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.682 rmmod nvme_tcp 00:15:44.682 rmmod nvme_fabrics 00:15:44.941 rmmod nvme_keyring 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2245682 ']' 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2245682 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2245682 ']' 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2245682 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2245682 00:15:44.941 08:28:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2245682' 00:15:44.941 killing process with pid 2245682 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2245682 00:15:44.941 08:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2245682 00:15:47.476 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:47.477 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.477 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.477 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.477 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.477 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.477 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.477 08:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.042 08:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.042 00:15:50.042 real 0m27.033s 00:15:50.042 user 1m26.476s 00:15:50.042 sys 0m7.241s 00:15:50.042 08:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.042 08:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:50.042 ************************************ 00:15:50.042 END TEST nvmf_lvol 00:15:50.042 ************************************ 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:50.042 ************************************ 00:15:50.042 START TEST nvmf_lvs_grow 00:15:50.042 ************************************ 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:50.042 * Looking for test storage... 
00:15:50.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.042 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.043 08:29:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:50.043 08:29:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.043 08:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:53.339 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:53.339 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:53.339 
08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:53.339 Found net devices under 0000:84:00.0: cvl_0_0 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.339 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:53.340 Found net devices under 0000:84:00.1: cvl_0_1 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.340 08:29:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:53.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:15:53.340 00:15:53.340 --- 10.0.0.2 ping statistics --- 00:15:53.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.340 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:53.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:15:53.340 00:15:53.340 --- 10.0.0.1 ping statistics --- 00:15:53.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.340 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2250173 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2250173 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2250173 ']' 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.340 08:29:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:53.340 [2024-07-23 08:29:05.810946] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:53.340 [2024-07-23 08:29:05.811246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.598 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.857 [2024-07-23 08:29:06.130377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.116 [2024-07-23 08:29:06.614456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.116 [2024-07-23 08:29:06.614536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.117 [2024-07-23 08:29:06.614571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.117 [2024-07-23 08:29:06.614628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.117 [2024-07-23 08:29:06.614676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.117 [2024-07-23 08:29:06.614787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.053 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.053 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:55.053 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.053 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.053 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:55.053 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.053 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:55.620 [2024-07-23 08:29:07.952424] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.620 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:55.620 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:55.620 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.620 08:29:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:55.620 ************************************ 00:15:55.620 START TEST lvs_grow_clean 00:15:55.620 ************************************ 00:15:55.620 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:55.620 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:55.621 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:56.189 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:56.189 08:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:57.126 08:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fb193dbc-0027-46b8-94ad-92362dfb3aab 00:15:57.126 08:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:15:57.126 08:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:57.385 08:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:57.385 08:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:57.385 08:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fb193dbc-0027-46b8-94ad-92362dfb3aab lvol 150 00:15:57.953 08:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c033d0fe-7469-4f23-9c56-952f9e43b9f9 00:15:57.953 08:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:58.212 08:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:58.471 [2024-07-23 08:29:10.755923] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:58.471 [2024-07-23 08:29:10.756152] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:58.471 true 00:15:58.471 08:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:15:58.471 08:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:59.041 08:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:59.041 08:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:59.611 08:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c033d0fe-7469-4f23-9c56-952f9e43b9f9 00:16:00.178 08:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:00.746 [2024-07-23 08:29:13.016609] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.746 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2251134 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2251134 /var/tmp/bdevperf.sock 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2251134 ']' 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.313 08:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:01.313 [2024-07-23 08:29:13.784051] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:01.313 [2024-07-23 08:29:13.784292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251134 ] 00:16:01.573 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.573 [2024-07-23 08:29:13.989578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.832 [2024-07-23 08:29:14.302925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.397 08:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.397 08:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:02.397 08:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:03.331 Nvme0n1 00:16:03.331 08:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:03.924 [ 00:16:03.924 { 00:16:03.924 "name": "Nvme0n1", 00:16:03.924 "aliases": [ 00:16:03.924 "c033d0fe-7469-4f23-9c56-952f9e43b9f9" 00:16:03.924 ], 00:16:03.924 "product_name": "NVMe disk", 00:16:03.924 "block_size": 4096, 00:16:03.924 "num_blocks": 38912, 00:16:03.924 "uuid": "c033d0fe-7469-4f23-9c56-952f9e43b9f9", 00:16:03.924 "assigned_rate_limits": { 00:16:03.924 "rw_ios_per_sec": 0, 00:16:03.924 "rw_mbytes_per_sec": 0, 00:16:03.924 "r_mbytes_per_sec": 0, 00:16:03.924 "w_mbytes_per_sec": 0 00:16:03.924 }, 00:16:03.924 "claimed": false, 00:16:03.924 "zoned": false, 00:16:03.924 "supported_io_types": { 00:16:03.924 "read": true, 00:16:03.924 "write": true, 00:16:03.924 "unmap": true, 00:16:03.924 "flush": true, 00:16:03.924 "reset": true, 00:16:03.924 "nvme_admin": true, 00:16:03.924 "nvme_io": true, 00:16:03.924 "nvme_io_md": false, 00:16:03.924 "write_zeroes": true, 00:16:03.924 "zcopy": false, 00:16:03.924 "get_zone_info": false, 00:16:03.924 "zone_management": false, 00:16:03.924 "zone_append": false, 00:16:03.924 "compare": true, 00:16:03.924 "compare_and_write": true, 00:16:03.924 "abort": true, 00:16:03.924 "seek_hole": false, 00:16:03.924 "seek_data": false, 00:16:03.924 "copy": true, 00:16:03.924 "nvme_iov_md": false 00:16:03.924 }, 00:16:03.924 "memory_domains": [ 00:16:03.924 { 00:16:03.924 "dma_device_id": "system", 00:16:03.924 "dma_device_type": 1 00:16:03.924 } 00:16:03.924 ], 00:16:03.924 "driver_specific": { 00:16:03.924 "nvme": [ 00:16:03.924 { 00:16:03.924 "trid": { 00:16:03.924 "trtype": "TCP", 00:16:03.924 "adrfam": "IPv4", 00:16:03.924 "traddr": "10.0.0.2", 00:16:03.924 "trsvcid": "4420", 00:16:03.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:03.924 }, 00:16:03.924 "ctrlr_data": { 00:16:03.924 "cntlid": 1, 00:16:03.924 "vendor_id": "0x8086", 00:16:03.924 "model_number": "SPDK bdev Controller", 00:16:03.924 "serial_number": "SPDK0", 00:16:03.924 "firmware_revision": "24.09", 00:16:03.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.924 "oacs": { 00:16:03.924 "security": 0, 00:16:03.924 "format": 0, 00:16:03.924 "firmware": 0, 00:16:03.924 "ns_manage": 0 00:16:03.924 }, 00:16:03.924 
"multi_ctrlr": true, 00:16:03.924 "ana_reporting": false 00:16:03.924 }, 00:16:03.924 "vs": { 00:16:03.924 "nvme_version": "1.3" 00:16:03.924 }, 00:16:03.924 "ns_data": { 00:16:03.924 "id": 1, 00:16:03.924 "can_share": true 00:16:03.924 } 00:16:03.924 } 00:16:03.924 ], 00:16:03.924 "mp_policy": "active_passive" 00:16:03.924 } 00:16:03.924 } 00:16:03.924 ] 00:16:03.924 08:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2251403 00:16:03.924 08:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:03.924 08:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:03.924 Running I/O for 10 seconds... 00:16:05.304 Latency(us) 00:16:05.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.304 Nvme0n1 : 1.00 8828.00 34.48 0.00 0.00 0.00 0.00 0.00 00:16:05.304 =================================================================================================================== 00:16:05.304 Total : 8828.00 34.48 0.00 0.00 0.00 0.00 0.00 00:16:05.304 00:16:05.870 08:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:06.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.129 Nvme0n1 : 2.00 8954.00 34.98 0.00 0.00 0.00 0.00 0.00 00:16:06.129 =================================================================================================================== 00:16:06.129 Total : 8954.00 34.98 0.00 0.00 0.00 0.00 0.00 00:16:06.129 00:16:06.387 true 00:16:06.387 08:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:06.387 08:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:06.955 08:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:06.955 08:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:06.955 08:29:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2251403 00:16:06.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.955 Nvme0n1 : 3.00 8975.00 35.06 0.00 0.00 0.00 0.00 0.00 00:16:06.955 =================================================================================================================== 00:16:06.955 Total : 8975.00 35.06 0.00 0.00 0.00 0.00 0.00 00:16:06.955 00:16:08.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.329 Nvme0n1 : 4.00 8985.50 35.10 0.00 0.00 0.00 0.00 0.00 00:16:08.329 =================================================================================================================== 00:16:08.329 Total : 8985.50 35.10 0.00 0.00 0.00 0.00 0.00 00:16:08.329 00:16:09.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:16:09.264 Nvme0n1 : 5.00 9017.20 35.22 0.00 0.00 0.00 0.00 0.00 00:16:09.264 =================================================================================================================== 00:16:09.264 Total : 9017.20 35.22 0.00 0.00 0.00 0.00 0.00 00:16:09.264 00:16:10.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.199 Nvme0n1 : 6.00 9038.33 35.31 0.00 0.00 0.00 0.00 0.00 00:16:10.199 =================================================================================================================== 00:16:10.199 Total : 9038.33 35.31 0.00 0.00 0.00 0.00 0.00 00:16:10.199 00:16:11.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.133 Nvme0n1 : 7.00 9053.43 35.36 0.00 0.00 0.00 0.00 0.00 00:16:11.133 =================================================================================================================== 00:16:11.133 Total : 9053.43 35.36 0.00 0.00 0.00 0.00 0.00 00:16:11.133 00:16:12.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:12.068 Nvme0n1 : 8.00 9080.62 35.47 0.00 0.00 0.00 0.00 0.00 00:16:12.068 =================================================================================================================== 00:16:12.068 Total : 9080.62 35.47 0.00 0.00 0.00 0.00 0.00 00:16:12.068 00:16:13.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.003 Nvme0n1 : 9.00 9087.67 35.50 0.00 0.00 0.00 0.00 0.00 00:16:13.003 =================================================================================================================== 00:16:13.003 Total : 9087.67 35.50 0.00 0.00 0.00 0.00 0.00 00:16:13.003 00:16:13.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.936 Nvme0n1 : 10.00 9093.30 35.52 0.00 0.00 0.00 0.00 0.00 00:16:13.936 =================================================================================================================== 00:16:13.936 Total : 9093.30 35.52 0.00 0.00 0.00 0.00 0.00 00:16:13.936 00:16:14.194 00:16:14.194 Latency(us) 00:16:14.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:14.194 Nvme0n1 : 10.01 9099.88 35.55 0.00 0.00 14057.30 3835.07 26991.12 00:16:14.194 =================================================================================================================== 00:16:14.194 Total : 9099.88 35.55 0.00 0.00 14057.30 3835.07 26991.12 00:16:14.194 0 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2251134 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2251134 ']' 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2251134 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2251134 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:14.194 08:29:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2251134' 00:16:14.194 killing process with pid 2251134 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2251134 00:16:14.194 Received shutdown signal, test time was about 10.000000 seconds 00:16:14.194 00:16:14.194 Latency(us) 00:16:14.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.194 =================================================================================================================== 00:16:14.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.194 08:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2251134 00:16:15.571 08:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:16.139 08:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:16.706 08:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:16.706 08:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:17.274 08:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:17.274 08:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:17.274 08:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:17.844 [2024-07-23 08:29:30.262647] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:17.844 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:18.104 request: 00:16:18.104 { 00:16:18.104 "uuid": "fb193dbc-0027-46b8-94ad-92362dfb3aab", 00:16:18.104 "method": "bdev_lvol_get_lvstores", 00:16:18.104 "req_id": 1 00:16:18.104 } 00:16:18.104 Got JSON-RPC error response 00:16:18.104 response: 00:16:18.104 { 00:16:18.104 "code": -19, 00:16:18.104 "message": "No such device" 00:16:18.104 } 00:16:18.104 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:18.104 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:18.104 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:18.104 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:18.104 08:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:19.044 aio_bdev 00:16:19.044 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c033d0fe-7469-4f23-9c56-952f9e43b9f9 00:16:19.044 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c033d0fe-7469-4f23-9c56-952f9e43b9f9 00:16:19.044 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:19.044 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:19.044 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:19.044 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:19.044 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:19.641 08:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b c033d0fe-7469-4f23-9c56-952f9e43b9f9 -t 2000 00:16:19.900 [ 00:16:19.900 { 00:16:19.900 "name": "c033d0fe-7469-4f23-9c56-952f9e43b9f9", 00:16:19.900 "aliases": [ 00:16:19.900 "lvs/lvol" 00:16:19.900 ], 00:16:19.900 "product_name": "Logical Volume", 00:16:19.900 "block_size": 4096, 00:16:19.900 "num_blocks": 38912, 00:16:19.900 "uuid": "c033d0fe-7469-4f23-9c56-952f9e43b9f9", 00:16:19.900 "assigned_rate_limits": { 00:16:19.900 "rw_ios_per_sec": 0, 00:16:19.900 "rw_mbytes_per_sec": 0, 00:16:19.900 "r_mbytes_per_sec": 0, 00:16:19.900 "w_mbytes_per_sec": 0 00:16:19.900 }, 00:16:19.900 "claimed": false, 00:16:19.900 "zoned": false, 00:16:19.900 "supported_io_types": { 00:16:19.900 "read": true, 00:16:19.900 "write": true, 00:16:19.900 "unmap": true, 00:16:19.900 "flush": false, 00:16:19.900 "reset": true, 00:16:19.900 "nvme_admin": false, 00:16:19.900 "nvme_io": false, 00:16:19.900 "nvme_io_md": false, 00:16:19.900 "write_zeroes": true, 00:16:19.900 "zcopy": false, 00:16:19.900 "get_zone_info": false, 00:16:19.900 "zone_management": false, 00:16:19.900 "zone_append": false, 00:16:19.900 "compare": false, 00:16:19.900 "compare_and_write": false, 00:16:19.900 "abort": false, 00:16:19.900 "seek_hole": true, 00:16:19.900 "seek_data": true, 00:16:19.900 "copy": false, 00:16:19.900 "nvme_iov_md": false 00:16:19.900 }, 00:16:19.900 "driver_specific": { 00:16:19.900 "lvol": { 00:16:19.900 "lvol_store_uuid": "fb193dbc-0027-46b8-94ad-92362dfb3aab", 00:16:19.900 "base_bdev": "aio_bdev", 00:16:19.900 "thin_provision": false, 00:16:19.900 "num_allocated_clusters": 38, 00:16:19.900 "snapshot": false, 00:16:19.900 "clone": false, 00:16:19.900 "esnap_clone": false 00:16:19.900 } 00:16:19.900 } 00:16:19.900 } 00:16:19.900 ] 00:16:19.900 08:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:19.900 08:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:19.900 08:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:20.468 08:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:20.469 08:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:20.469 08:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:21.037 08:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:21.037 08:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c033d0fe-7469-4f23-9c56-952f9e43b9f9 00:16:21.606 08:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb193dbc-0027-46b8-94ad-92362dfb3aab 00:16:22.173 08:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:22.741 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:22.741 00:16:22.741 real 0m27.226s 00:16:22.741 user 0m27.147s 00:16:22.741 sys 0m3.191s 00:16:22.741 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.741 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:22.741 ************************************ 00:16:22.741 END TEST lvs_grow_clean 00:16:22.741 ************************************ 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:23.001 ************************************ 00:16:23.001 START TEST lvs_grow_dirty 00:16:23.001 ************************************ 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:23.001 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:23.261 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:23.261 08:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 
00:16:23.831 08:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:23.831 08:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:23.831 08:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:24.399 08:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:24.400 08:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:24.400 08:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a lvol 150 00:16:24.969 08:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=67f10cd4-b8ac-455f-a458-d868349f2bc1 00:16:24.969 08:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:24.969 08:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:25.537 [2024-07-23 08:29:37.973589] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:25.537 [2024-07-23 08:29:37.973859] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:25.537 true 00:16:25.537 08:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:25.537 08:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:26.105 08:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:26.105 08:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:26.673 08:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67f10cd4-b8ac-455f-a458-d868349f2bc1 00:16:27.239 08:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:27.806 [2024-07-23 08:29:40.266434] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.806 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2254109 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2254109 /var/tmp/bdevperf.sock 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2254109 ']' 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.374 08:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:28.634 [2024-07-23 08:29:41.027559] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:28.634 [2024-07-23 08:29:41.027893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254109 ] 00:16:28.893 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.893 [2024-07-23 08:29:41.263099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.152 [2024-07-23 08:29:41.580598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.086 08:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.086 08:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:30.086 08:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:31.019 Nvme0n1 00:16:31.019 08:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:31.585 [ 00:16:31.585 { 00:16:31.585 "name": "Nvme0n1", 00:16:31.585 "aliases": [ 00:16:31.585 "67f10cd4-b8ac-455f-a458-d868349f2bc1" 00:16:31.585 ], 00:16:31.585 "product_name": "NVMe disk", 00:16:31.585 "block_size": 4096, 00:16:31.585 "num_blocks": 38912, 00:16:31.585 "uuid": "67f10cd4-b8ac-455f-a458-d868349f2bc1", 00:16:31.585 "assigned_rate_limits": { 00:16:31.585 "rw_ios_per_sec": 0, 00:16:31.585 "rw_mbytes_per_sec": 0, 00:16:31.585 "r_mbytes_per_sec": 0, 00:16:31.585 "w_mbytes_per_sec": 0 00:16:31.585 }, 00:16:31.585 "claimed": false, 00:16:31.585 "zoned": false, 00:16:31.585 "supported_io_types": { 00:16:31.585 "read": true, 00:16:31.585 "write": true, 00:16:31.585 "unmap": true, 00:16:31.585 "flush": true, 00:16:31.585 "reset": true, 00:16:31.585 "nvme_admin": true, 00:16:31.585 "nvme_io": true, 00:16:31.585 "nvme_io_md": false, 00:16:31.585 "write_zeroes": true, 00:16:31.585 "zcopy": false, 00:16:31.585 "get_zone_info": false, 00:16:31.585 "zone_management": false, 00:16:31.585 "zone_append": false, 00:16:31.585 "compare": true, 00:16:31.585 "compare_and_write": true, 00:16:31.585 "abort": true, 00:16:31.585 "seek_hole": false, 00:16:31.585 "seek_data": false, 00:16:31.585 "copy": true, 00:16:31.585 "nvme_iov_md": false 00:16:31.585 }, 00:16:31.585 "memory_domains": [ 00:16:31.585 { 00:16:31.585 "dma_device_id": "system", 00:16:31.585 "dma_device_type": 1 00:16:31.585 } 00:16:31.585 ], 00:16:31.585 "driver_specific": { 00:16:31.585 "nvme": [ 00:16:31.585 { 00:16:31.585 "trid": { 00:16:31.585 "trtype": "TCP", 00:16:31.585 "adrfam": "IPv4", 00:16:31.585 "traddr": "10.0.0.2", 00:16:31.585 "trsvcid": "4420", 00:16:31.585 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:31.585 }, 00:16:31.585 "ctrlr_data": { 00:16:31.585 "cntlid": 1, 00:16:31.585 "vendor_id": "0x8086", 00:16:31.585 "model_number": "SPDK bdev Controller", 00:16:31.585 "serial_number": "SPDK0", 00:16:31.585 "firmware_revision": "24.09", 00:16:31.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:31.585 "oacs": { 00:16:31.585 "security": 0, 00:16:31.585 "format": 0, 00:16:31.585 "firmware": 0, 00:16:31.585 "ns_manage": 0 00:16:31.585 }, 00:16:31.585 
"multi_ctrlr": true, 00:16:31.585 "ana_reporting": false 00:16:31.585 }, 00:16:31.585 "vs": { 00:16:31.585 "nvme_version": "1.3" 00:16:31.585 }, 00:16:31.585 "ns_data": { 00:16:31.585 "id": 1, 00:16:31.585 "can_share": true 00:16:31.585 } 00:16:31.585 } 00:16:31.585 ], 00:16:31.585 "mp_policy": "active_passive" 00:16:31.585 } 00:16:31.585 } 00:16:31.585 ] 00:16:31.585 08:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2254500 00:16:31.585 08:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:31.585 08:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:31.843 Running I/O for 10 seconds... 00:16:32.777 Latency(us) 00:16:32.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.777 Nvme0n1 : 1.00 8764.00 34.23 0.00 0.00 0.00 0.00 0.00 00:16:32.777 =================================================================================================================== 00:16:32.777 Total : 8764.00 34.23 0.00 0.00 0.00 0.00 0.00 00:16:32.777 00:16:33.748 08:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:33.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.748 Nvme0n1 : 2.00 8827.00 34.48 0.00 0.00 0.00 0.00 0.00 00:16:33.748 =================================================================================================================== 00:16:33.748 Total : 8827.00 34.48 0.00 0.00 0.00 0.00 0.00 00:16:33.748 00:16:34.006 true 00:16:34.264 08:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:34.264 08:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:34.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.830 Nvme0n1 : 3.00 8890.33 34.73 0.00 0.00 0.00 0.00 0.00 00:16:34.830 =================================================================================================================== 00:16:34.830 Total : 8890.33 34.73 0.00 0.00 0.00 0.00 0.00 00:16:34.830 00:16:34.830 08:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:34.830 08:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:34.830 08:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2254500 00:16:35.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.764 Nvme0n1 : 4.00 8953.75 34.98 0.00 0.00 0.00 0.00 0.00 00:16:35.764 =================================================================================================================== 00:16:35.764 Total : 8953.75 34.98 0.00 0.00 0.00 0.00 0.00 00:16:35.764 00:16:36.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:16:36.698 Nvme0n1 : 5.00 8966.40 35.02 0.00 0.00 0.00 0.00 0.00 00:16:36.698 =================================================================================================================== 00:16:36.698 Total : 8966.40 35.02 0.00 0.00 0.00 0.00 0.00 00:16:36.698 00:16:38.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.073 Nvme0n1 : 6.00 8980.50 35.08 0.00 0.00 0.00 0.00 0.00 00:16:38.073 =================================================================================================================== 00:16:38.073 Total : 8980.50 35.08 0.00 0.00 0.00 0.00 0.00 00:16:38.073 00:16:39.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.007 Nvme0n1 : 7.00 9003.86 35.17 0.00 0.00 0.00 0.00 0.00 00:16:39.007 =================================================================================================================== 00:16:39.007 Total : 9003.86 35.17 0.00 0.00 0.00 0.00 0.00 00:16:39.007 00:16:39.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.941 Nvme0n1 : 8.00 9037.25 35.30 0.00 0.00 0.00 0.00 0.00 00:16:39.941 =================================================================================================================== 00:16:39.941 Total : 9037.25 35.30 0.00 0.00 0.00 0.00 0.00 00:16:39.941 00:16:40.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.877 Nvme0n1 : 9.00 9049.11 35.35 0.00 0.00 0.00 0.00 0.00 00:16:40.877 =================================================================================================================== 00:16:40.877 Total : 9049.11 35.35 0.00 0.00 0.00 0.00 0.00 00:16:40.877 00:16:41.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.811 Nvme0n1 : 10.00 9065.00 35.41 0.00 0.00 0.00 0.00 0.00 00:16:41.811 =================================================================================================================== 00:16:41.812 Total : 9065.00 35.41 0.00 0.00 0.00 0.00 0.00 00:16:41.812 00:16:41.812 00:16:41.812 Latency(us) 00:16:41.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.812 Nvme0n1 : 10.01 9066.09 35.41 0.00 0.00 14107.30 6310.87 27767.85 00:16:41.812 =================================================================================================================== 00:16:41.812 Total : 9066.09 35.41 0.00 0.00 14107.30 6310.87 27767.85 00:16:41.812 0 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2254109 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2254109 ']' 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2254109 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2254109 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:41.812 08:29:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2254109' 00:16:41.812 killing process with pid 2254109 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2254109 00:16:41.812 Received shutdown signal, test time was about 10.000000 seconds 00:16:41.812 00:16:41.812 Latency(us) 00:16:41.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.812 =================================================================================================================== 00:16:41.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:41.812 08:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2254109 00:16:43.187 08:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:43.755 08:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:44.322 08:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:44.322 08:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2250173 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2250173 00:16:44.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2250173 Killed "${NVMF_APP[@]}" "$@" 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2255970 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # 
waitforlisten 2255970 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2255970 ']' 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.890 08:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:45.151 [2024-07-23 08:29:57.517776] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:45.151 [2024-07-23 08:29:57.518093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.410 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.410 [2024-07-23 08:29:57.855343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.980 [2024-07-23 08:29:58.332640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.980 [2024-07-23 08:29:58.332760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.980 [2024-07-23 08:29:58.332820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.980 [2024-07-23 08:29:58.332872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.980 [2024-07-23 08:29:58.332924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.980 [2024-07-23 08:29:58.333029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.549 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.549 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:46.549 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.549 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.549 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:46.808 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.808 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:47.375 [2024-07-23 08:29:59.606932] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:47.376 [2024-07-23 08:29:59.607433] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:47.376 [2024-07-23 08:29:59.607541] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 67f10cd4-b8ac-455f-a458-d868349f2bc1 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=67f10cd4-b8ac-455f-a458-d868349f2bc1 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:47.376 08:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:47.944 08:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67f10cd4-b8ac-455f-a458-d868349f2bc1 -t 2000 00:16:48.559 [ 00:16:48.559 { 00:16:48.559 "name": "67f10cd4-b8ac-455f-a458-d868349f2bc1", 00:16:48.559 "aliases": [ 00:16:48.559 "lvs/lvol" 00:16:48.559 ], 00:16:48.559 "product_name": "Logical Volume", 00:16:48.559 "block_size": 4096, 00:16:48.559 "num_blocks": 38912, 00:16:48.559 "uuid": "67f10cd4-b8ac-455f-a458-d868349f2bc1", 00:16:48.559 "assigned_rate_limits": { 00:16:48.559 "rw_ios_per_sec": 0, 00:16:48.559 "rw_mbytes_per_sec": 0, 00:16:48.559 "r_mbytes_per_sec": 0, 00:16:48.559 "w_mbytes_per_sec": 0 00:16:48.559 }, 00:16:48.559 "claimed": false, 00:16:48.559 "zoned": false, 
00:16:48.559 "supported_io_types": { 00:16:48.559 "read": true, 00:16:48.559 "write": true, 00:16:48.559 "unmap": true, 00:16:48.559 "flush": false, 00:16:48.559 "reset": true, 00:16:48.559 "nvme_admin": false, 00:16:48.559 "nvme_io": false, 00:16:48.559 "nvme_io_md": false, 00:16:48.559 "write_zeroes": true, 00:16:48.559 "zcopy": false, 00:16:48.559 "get_zone_info": false, 00:16:48.559 "zone_management": false, 00:16:48.559 "zone_append": false, 00:16:48.559 "compare": false, 00:16:48.559 "compare_and_write": false, 00:16:48.559 "abort": false, 00:16:48.559 "seek_hole": true, 00:16:48.559 "seek_data": true, 00:16:48.559 "copy": false, 00:16:48.559 "nvme_iov_md": false 00:16:48.559 }, 00:16:48.559 "driver_specific": { 00:16:48.559 "lvol": { 00:16:48.559 "lvol_store_uuid": "9e18c51b-3a9a-4c89-a877-f4bcc0e7546a", 00:16:48.559 "base_bdev": "aio_bdev", 00:16:48.559 "thin_provision": false, 00:16:48.559 "num_allocated_clusters": 38, 00:16:48.559 "snapshot": false, 00:16:48.559 "clone": false, 00:16:48.559 "esnap_clone": false 00:16:48.559 } 00:16:48.559 } 00:16:48.559 } 00:16:48.559 ] 00:16:48.559 08:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:48.559 08:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:48.559 08:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:48.823 08:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:48.823 08:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:48.823 08:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:49.391 08:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:49.391 08:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:49.961 [2024-07-23 08:30:02.433182] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:50.220 08:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:50.788 request: 00:16:50.788 { 00:16:50.788 "uuid": "9e18c51b-3a9a-4c89-a877-f4bcc0e7546a", 00:16:50.788 "method": "bdev_lvol_get_lvstores", 00:16:50.788 "req_id": 1 00:16:50.788 } 00:16:50.788 Got JSON-RPC error response 00:16:50.788 response: 00:16:50.788 { 00:16:50.788 "code": -19, 00:16:50.788 "message": "No such device" 00:16:50.788 } 00:16:50.788 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:50.788 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.788 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.788 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.788 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:51.355 aio_bdev 00:16:51.355 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67f10cd4-b8ac-455f-a458-d868349f2bc1 00:16:51.355 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=67f10cd4-b8ac-455f-a458-d868349f2bc1 00:16:51.355 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.355 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:51.355 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.355 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.355 08:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:51.924 08:30:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67f10cd4-b8ac-455f-a458-d868349f2bc1 -t 2000 00:16:52.493 [ 00:16:52.493 { 00:16:52.493 "name": "67f10cd4-b8ac-455f-a458-d868349f2bc1", 00:16:52.493 "aliases": [ 00:16:52.493 "lvs/lvol" 00:16:52.493 ], 00:16:52.493 "product_name": "Logical Volume", 00:16:52.493 "block_size": 4096, 00:16:52.493 "num_blocks": 38912, 00:16:52.493 "uuid": "67f10cd4-b8ac-455f-a458-d868349f2bc1", 00:16:52.493 "assigned_rate_limits": { 00:16:52.493 "rw_ios_per_sec": 0, 00:16:52.493 "rw_mbytes_per_sec": 0, 00:16:52.493 "r_mbytes_per_sec": 0, 00:16:52.493 "w_mbytes_per_sec": 0 00:16:52.493 }, 00:16:52.493 "claimed": false, 00:16:52.493 "zoned": false, 00:16:52.493 "supported_io_types": { 00:16:52.493 "read": true, 00:16:52.493 "write": true, 00:16:52.493 "unmap": true, 00:16:52.493 "flush": false, 00:16:52.493 "reset": true, 00:16:52.493 "nvme_admin": false, 00:16:52.493 "nvme_io": false, 00:16:52.493 "nvme_io_md": false, 00:16:52.493 "write_zeroes": true, 00:16:52.493 "zcopy": false, 00:16:52.493 "get_zone_info": false, 00:16:52.493 "zone_management": false, 00:16:52.493 "zone_append": false, 00:16:52.493 "compare": false, 00:16:52.493 "compare_and_write": false, 00:16:52.493 "abort": false, 00:16:52.493 "seek_hole": true, 00:16:52.493 "seek_data": true, 00:16:52.493 "copy": false, 00:16:52.493 "nvme_iov_md": false 00:16:52.493 }, 00:16:52.493 "driver_specific": { 00:16:52.493 "lvol": { 00:16:52.494 "lvol_store_uuid": "9e18c51b-3a9a-4c89-a877-f4bcc0e7546a", 00:16:52.494 "base_bdev": "aio_bdev", 00:16:52.494 "thin_provision": false, 00:16:52.494 "num_allocated_clusters": 38, 00:16:52.494 "snapshot": false, 00:16:52.494 "clone": false, 00:16:52.494 "esnap_clone": false 00:16:52.494 } 00:16:52.494 } 00:16:52.494 } 00:16:52.494 ] 00:16:52.494 08:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:52.494 08:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:52.494 08:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:53.061 08:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:53.061 08:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 00:16:53.061 08:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:53.629 08:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:53.629 08:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67f10cd4-b8ac-455f-a458-d868349f2bc1 00:16:54.198 08:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e18c51b-3a9a-4c89-a877-f4bcc0e7546a 
00:16:54.765 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:55.332 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.332 00:16:55.332 real 0m32.397s 00:16:55.332 user 1m19.750s 00:16:55.332 sys 0m6.900s 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:55.333 ************************************ 00:16:55.333 END TEST lvs_grow_dirty 00:16:55.333 ************************************ 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:55.333 nvmf_trace.0 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.333 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.333 rmmod nvme_tcp 00:16:55.333 rmmod nvme_fabrics 00:16:55.593 rmmod nvme_keyring 00:16:55.593 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.593 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:55.593 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:55.593 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2255970 ']' 00:16:55.593 
08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2255970 00:16:55.593 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2255970 ']' 00:16:55.593 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2255970 00:16:55.593 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:55.594 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.594 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2255970 00:16:55.594 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:55.594 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:55.594 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2255970' 00:16:55.594 killing process with pid 2255970 00:16:55.594 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2255970 00:16:55.594 08:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2255970 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.131 08:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.040 00:17:00.040 real 1m10.364s 00:17:00.040 user 2m1.827s 00:17:00.040 sys 0m13.777s 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:00.040 ************************************ 00:17:00.040 END TEST nvmf_lvs_grow 00:17:00.040 ************************************ 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:00.040 ************************************ 00:17:00.040 START TEST nvmf_bdev_io_wait 
00:17:00.040 ************************************ 00:17:00.040 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:00.300 * Looking for test storage... 00:17:00.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.300 
08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.300 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.301 08:30:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:03.591 08:30:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:03.591 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:03.591 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.591 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:03.592 Found net devices under 0000:84:00.0: cvl_0_0 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:03.592 Found net devices under 0000:84:00.1: cvl_0_1 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:03.592 08:30:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:03.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:17:03.592 00:17:03.592 --- 10.0.0.2 ping statistics --- 00:17:03.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.592 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:17:03.592 00:17:03.592 --- 10.0.0.1 ping statistics --- 00:17:03.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.592 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2260044 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2260044 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2260044 ']' 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.592 08:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:03.592 [2024-07-23 08:30:16.085346] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
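The nvmf_tcp_init sequence traced above builds the two-sided test topology before nvmf_tgt is launched: the first ice port is moved into a dedicated network namespace and used as the target interface, the second stays in the default namespace as the initiator, and TCP port 4420 is opened for NVMe/TCP. A condensed sketch of that setup, using the names and addresses from this run (cvl_0_0 / cvl_0_1, 10.0.0.2 / 10.0.0.1, namespace cvl_0_0_ns_spdk):

ip netns add cvl_0_0_ns_spdk                                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # allow NVMe/TCP to the target

The ping checks (10.0.0.2 from the default namespace, 10.0.0.1 from inside the namespace) confirm the link before the target application is started with ip netns exec.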
00:17:03.592 [2024-07-23 08:30:16.085533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.851 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.851 [2024-07-23 08:30:16.319644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.418 [2024-07-23 08:30:16.805987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.418 [2024-07-23 08:30:16.806106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.418 [2024-07-23 08:30:16.806167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.418 [2024-07-23 08:30:16.806212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.418 [2024-07-23 08:30:16.806259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.418 [2024-07-23 08:30:16.806442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.418 [2024-07-23 08:30:16.806485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.418 [2024-07-23 08:30:16.806544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.418 [2024-07-23 08:30:16.806556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.704 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.704 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:04.704 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.704 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.704 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.962 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.221 08:30:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.221 [2024-07-23 08:30:17.556709] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.221 Malloc0 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.221 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.222 [2024-07-23 08:30:17.703739] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2260207 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2260209 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@32 -- # FLUSH_PID=2260211 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.222 { 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme$subsystem", 00:17:05.222 "trtype": "$TEST_TRANSPORT", 00:17:05.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "$NVMF_PORT", 00:17:05.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.222 "hdgst": ${hdgst:-false}, 00:17:05.222 "ddgst": ${ddgst:-false} 00:17:05.222 }, 00:17:05.222 "method": "bdev_nvme_attach_controller" 00:17:05.222 } 00:17:05.222 EOF 00:17:05.222 )") 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.222 { 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme$subsystem", 00:17:05.222 "trtype": "$TEST_TRANSPORT", 00:17:05.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "$NVMF_PORT", 00:17:05.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.222 "hdgst": ${hdgst:-false}, 00:17:05.222 "ddgst": ${ddgst:-false} 00:17:05.222 }, 00:17:05.222 "method": "bdev_nvme_attach_controller" 00:17:05.222 } 00:17:05.222 EOF 00:17:05.222 )") 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2260213 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.222 { 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme$subsystem", 00:17:05.222 "trtype": "$TEST_TRANSPORT", 00:17:05.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "$NVMF_PORT", 00:17:05.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.222 "hdgst": ${hdgst:-false}, 00:17:05.222 
"ddgst": ${ddgst:-false} 00:17:05.222 }, 00:17:05.222 "method": "bdev_nvme_attach_controller" 00:17:05.222 } 00:17:05.222 EOF 00:17:05.222 )") 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:05.222 { 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme$subsystem", 00:17:05.222 "trtype": "$TEST_TRANSPORT", 00:17:05.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "$NVMF_PORT", 00:17:05.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.222 "hdgst": ${hdgst:-false}, 00:17:05.222 "ddgst": ${ddgst:-false} 00:17:05.222 }, 00:17:05.222 "method": "bdev_nvme_attach_controller" 00:17:05.222 } 00:17:05.222 EOF 00:17:05.222 )") 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2260207 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme1", 00:17:05.222 "trtype": "tcp", 00:17:05.222 "traddr": "10.0.0.2", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "4420", 00:17:05.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.222 "hdgst": false, 00:17:05.222 "ddgst": false 00:17:05.222 }, 00:17:05.222 "method": "bdev_nvme_attach_controller" 00:17:05.222 }' 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme1", 00:17:05.222 "trtype": "tcp", 00:17:05.222 "traddr": "10.0.0.2", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "4420", 00:17:05.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.222 "hdgst": false, 00:17:05.222 "ddgst": false 00:17:05.222 }, 00:17:05.222 "method": "bdev_nvme_attach_controller" 00:17:05.222 }' 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme1", 00:17:05.222 "trtype": "tcp", 00:17:05.222 "traddr": "10.0.0.2", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "4420", 00:17:05.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.222 "hdgst": false, 00:17:05.222 "ddgst": false 00:17:05.222 }, 00:17:05.222 "method": "bdev_nvme_attach_controller" 00:17:05.222 }' 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:05.222 08:30:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:05.222 "params": { 00:17:05.222 "name": "Nvme1", 00:17:05.222 "trtype": "tcp", 00:17:05.222 "traddr": "10.0.0.2", 00:17:05.222 "adrfam": "ipv4", 00:17:05.222 "trsvcid": "4420", 00:17:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.223 "hdgst": false, 00:17:05.223 "ddgst": false 00:17:05.223 }, 00:17:05.223 "method": "bdev_nvme_attach_controller" 00:17:05.223 }' 00:17:05.481 [2024-07-23 08:30:17.813495] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:05.481 [2024-07-23 08:30:17.813497] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:05.481 [2024-07-23 08:30:17.813685] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-23 08:30:17.813687] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:05.481 --proc-type=auto ] 00:17:05.481 [2024-07-23 08:30:17.814011] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:05.481 [2024-07-23 08:30:17.814288] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:05.481 [2024-07-23 08:30:17.815240] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:05.481 [2024-07-23 08:30:17.815436] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:05.481 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.739 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.739 [2024-07-23 08:30:18.105085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.739 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.739 [2024-07-23 08:30:18.256035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.997 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.997 [2024-07-23 08:30:18.392944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:05.997 [2024-07-23 08:30:18.404459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.256 [2024-07-23 08:30:18.553468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:06.256 [2024-07-23 08:30:18.564766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.256 [2024-07-23 08:30:18.702029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.514 [2024-07-23 08:30:18.862874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.514 Running I/O for 1 seconds... 00:17:06.772 Running I/O for 1 seconds... 00:17:07.031 Running I/O for 1 seconds... 00:17:07.031 Running I/O for 1 seconds... 00:17:07.598 00:17:07.598 Latency(us) 00:17:07.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.598 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:07.598 Nvme1n1 : 1.01 6957.66 27.18 0.00 0.00 18286.84 4708.88 26602.76 00:17:07.598 =================================================================================================================== 00:17:07.598 Total : 6957.66 27.18 0.00 0.00 18286.84 4708.88 26602.76 00:17:07.856 00:17:07.856 Latency(us) 00:17:07.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.856 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:07.856 Nvme1n1 : 1.00 110117.38 430.15 0.00 0.00 1157.97 470.28 2342.31 00:17:07.856 =================================================================================================================== 00:17:07.856 Total : 110117.38 430.15 0.00 0.00 1157.97 470.28 2342.31 00:17:07.856 00:17:07.856 Latency(us) 00:17:07.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.857 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:07.857 Nvme1n1 : 1.02 4917.60 19.21 0.00 0.00 25799.74 9903.22 35729.26 00:17:07.857 =================================================================================================================== 00:17:07.857 Total : 4917.60 19.21 0.00 0.00 25799.74 9903.22 35729.26 00:17:08.114 00:17:08.115 Latency(us) 00:17:08.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.115 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:08.115 Nvme1n1 : 1.01 4906.23 19.16 0.00 0.00 25883.73 4126.34 38836.15 00:17:08.115 =================================================================================================================== 00:17:08.115 Total : 4906.23 19.16 0.00 0.00 25883.73 4126.34 38836.15 00:17:09.489 08:30:21 
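In these bdevperf summaries the MiB/s column is simply IOPS times the 4096-byte I/O size: 110117.38 * 4096 / 2^20 = 430.15 MiB/s for the flush job, and 4906.23 * 4096 / 2^20 = 19.16 MiB/s for the write job. The roughly 26 ms average latencies on the read and write jobs are likewise consistent with the queue depth: 128 outstanding I/Os divided by about 4900 IOPS is about 26 ms per I/O.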
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2260209 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2260211 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2260213 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.490 rmmod nvme_tcp 00:17:09.490 rmmod nvme_fabrics 00:17:09.490 rmmod nvme_keyring 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2260044 ']' 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2260044 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2260044 ']' 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2260044 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2260044 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2260044' 00:17:09.490 killing process with pid 2260044 00:17:09.490 08:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2260044 00:17:09.490 08:30:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2260044 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.396 08:30:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:13.930 00:17:13.930 real 0m13.372s 00:17:13.930 user 0m38.219s 00:17:13.930 sys 0m5.986s 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.930 ************************************ 00:17:13.930 END TEST nvmf_bdev_io_wait 00:17:13.930 ************************************ 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:13.930 ************************************ 00:17:13.930 START TEST nvmf_queue_depth 00:17:13.930 ************************************ 00:17:13.930 08:30:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:13.930 * Looking for test storage... 
00:17:13.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.930 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.931 08:30:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:13.931 08:30:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.225 08:30:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:17.225 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:17.225 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:17.225 08:30:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.225 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:17.226 Found net devices under 0000:84:00.0: cvl_0_0 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:17.226 Found net devices under 0000:84:00.1: cvl_0_1 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.226 
08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:17:17.226 00:17:17.226 --- 10.0.0.2 ping statistics --- 00:17:17.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.226 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:17:17.226 00:17:17.226 --- 10.0.0.1 ping statistics --- 00:17:17.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.226 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2262969 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2262969 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2262969 ']' 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.226 08:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:17.226 [2024-07-23 08:30:29.569716] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:17.226 [2024-07-23 08:30:29.570026] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.485 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.485 [2024-07-23 08:30:29.855673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.743 [2024-07-23 08:30:30.176465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.743 [2024-07-23 08:30:30.176550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.743 [2024-07-23 08:30:30.176585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.743 [2024-07-23 08:30:30.176615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.743 [2024-07-23 08:30:30.176641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.743 [2024-07-23 08:30:30.176699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.309 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.309 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:18.309 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.309 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.309 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.309 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.309 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.310 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.310 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.310 [2024-07-23 08:30:30.816517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.310 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.310 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.310 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.310 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.568 Malloc0 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.568 [2024-07-23 08:30:30.956166] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2263129 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2263129 /var/tmp/bdevperf.sock 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2263129 ']' 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.568 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.569 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.569 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.569 08:30:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.569 [2024-07-23 08:30:31.066568] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
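At this point the target has been configured entirely over its RPC socket: a TCP transport (with the -o and -u 8192 options recorded above), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420; bdevperf has also been started with -z, so it sits idle on /var/tmp/bdevperf.sock until a bdev is attached. A hedged sketch of the same sequence issued with scripts/rpc.py directly (the trace goes through the rpc_cmd helper, and the long Jenkins paths are shortened here):

  # sketch: configure the NVMe-oF target as recorded in the trace above
  rpc=./scripts/rpc.py                                   # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192           # transport options as used in this run
  $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator-side load generator: queue depth 1024, 4 KiB I/O, verify workload, 10 s,
  # held idle by -z until a bdev is attached over its own RPC socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # the next lines of the trace then attach the remote namespace and start the run:
  #   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
  #          -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  #   bdevperf.py -s /var/tmp/bdevperf.sock perform_tests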
00:17:18.569 [2024-07-23 08:30:31.066758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263129 ] 00:17:18.827 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.827 [2024-07-23 08:30:31.245607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.085 [2024-07-23 08:30:31.558436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.459 08:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.459 08:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:20.459 08:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.459 08:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.459 08:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:20.459 NVMe0n1 00:17:20.459 08:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.459 08:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:20.717 Running I/O for 10 seconds... 00:17:32.979 00:17:32.979 Latency(us) 00:17:32.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.979 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:32.979 Verification LBA range: start 0x0 length 0x4000 00:17:32.979 NVMe0n1 : 10.13 4945.99 19.32 0.00 0.00 205682.21 30292.20 124275.67 00:17:32.979 =================================================================================================================== 00:17:32.979 Total : 4945.99 19.32 0.00 0.00 205682.21 30292.20 124275.67 00:17:32.979 0 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2263129 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2263129 ']' 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2263129 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2263129 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2263129' 00:17:32.979 killing process with pid 2263129 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2263129 00:17:32.979 Received shutdown 
signal, test time was about 10.000000 seconds 00:17:32.979 00:17:32.979 Latency(us) 00:17:32.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.979 =================================================================================================================== 00:17:32.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.979 08:30:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2263129 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.979 rmmod nvme_tcp 00:17:32.979 rmmod nvme_fabrics 00:17:32.979 rmmod nvme_keyring 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2262969 ']' 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2262969 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2262969 ']' 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2262969 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2262969 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2262969' 00:17:32.979 killing process with pid 2262969 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2262969 00:17:32.979 08:30:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2262969 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.355 08:30:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:36.262 00:17:36.262 real 0m22.791s 00:17:36.262 user 0m31.962s 00:17:36.262 sys 0m4.816s 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:36.262 ************************************ 00:17:36.262 END TEST nvmf_queue_depth 00:17:36.262 ************************************ 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:36.262 08:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:36.523 ************************************ 00:17:36.523 START TEST nvmf_target_multipath 00:17:36.523 ************************************ 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:36.523 * Looking for test storage... 
00:17:36.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.523 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.524 08:30:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
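multipath.sh is following the same skeleton every target test in this log uses: source test/nvmf/common.sh, set the bdev and subsystem parameters, then call nvmftestinit, which installs nvmftestfini as the SIGINT/SIGTERM/EXIT trap and runs prepare_net_devs and (for NET_TYPE=phy with TCP) the nvmf_tcp_init namespace setup traced earlier and again below. A rough outline of that skeleton, using only names that appear in the trace (the absolute Jenkins paths are abbreviated here to $rootdir):

  # sketch: the common shape of a target test script in this run
  source "$rootdir/test/nvmf/common.sh"    # NVMF_PORT=4420, NVME_HOSTNQN, rpc_cmd, nvmftestinit/fini, ...

  MALLOC_BDEV_SIZE=64                      # MiB
  MALLOC_BLOCK_SIZE=512
  nqn=nqn.2016-06.io.spdk:cnode1
  rpc_py="$rootdir/scripts/rpc.py"

  nvmftestinit                             # prepare_net_devs -> nvmf_tcp_init; traps nvmftestfini on exit
  # ... test body: nvmfappstart, rpc_cmd configuration, workload ...
  nvmftestfini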
00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:39.815 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:39.815 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:39.815 Found net devices under 0000:84:00.0: cvl_0_0 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.815 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.816 08:30:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:39.816 Found net devices under 0000:84:00.1: cvl_0_1 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.816 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.075 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.075 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.075 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.076 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:17:40.076 00:17:40.076 --- 10.0.0.2 ping statistics --- 00:17:40.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.076 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:17:40.076 00:17:40.076 --- 10.0.0.1 ping statistics --- 00:17:40.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.076 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:40.076 only one NIC for nvmf test 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.076 rmmod nvme_tcp 00:17:40.076 rmmod nvme_fabrics 00:17:40.076 rmmod nvme_keyring 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.076 08:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:42.609 00:17:42.609 real 0m5.855s 
00:17:42.609 user 0m1.143s 00:17:42.609 sys 0m2.721s 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:42.609 ************************************ 00:17:42.609 END TEST nvmf_target_multipath 00:17:42.609 ************************************ 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:42.609 ************************************ 00:17:42.609 START TEST nvmf_zcopy 00:17:42.609 ************************************ 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:42.609 * Looking for test storage... 00:17:42.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.609 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:42.610 08:30:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:42.610 08:30:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:45.902 08:30:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:45.902 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:45.902 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.902 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:45.903 Found net devices under 0000:84:00.0: cvl_0_0 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:45.903 Found net devices under 0000:84:00.1: cvl_0_1 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:45.903 08:30:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:45.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:17:45.903 00:17:45.903 --- 10.0.0.2 ping statistics --- 00:17:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.903 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:17:45.903 00:17:45.903 --- 10.0.0.1 ping statistics --- 00:17:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.903 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2268983 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2268983 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2268983 ']' 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.903 08:30:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.162 [2024-07-23 08:30:58.474841] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:46.162 [2024-07-23 08:30:58.475022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.162 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.420 [2024-07-23 08:30:58.684115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.678 [2024-07-23 08:30:59.002367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.678 [2024-07-23 08:30:59.002452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.678 [2024-07-23 08:30:59.002486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.678 [2024-07-23 08:30:59.002516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.678 [2024-07-23 08:30:59.002545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.678 [2024-07-23 08:30:59.002603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.611 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.611 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:47.611 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.611 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.611 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 [2024-07-23 08:31:00.158253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 [2024-07-23 08:31:00.174544] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 malloc0 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.872 { 00:17:47.872 "params": { 00:17:47.872 "name": "Nvme$subsystem", 00:17:47.872 "trtype": "$TEST_TRANSPORT", 00:17:47.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.872 "adrfam": "ipv4", 00:17:47.872 "trsvcid": "$NVMF_PORT", 00:17:47.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.872 "hdgst": ${hdgst:-false}, 00:17:47.872 "ddgst": ${ddgst:-false} 00:17:47.872 }, 00:17:47.872 "method": "bdev_nvme_attach_controller" 00:17:47.872 } 00:17:47.872 EOF 00:17:47.872 )") 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
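The rpc_cmd calls traced above configure the zero-copy TCP target: create the transport with --zcopy, create subsystem cnode1, add data and discovery listeners on 10.0.0.2:4420, back the subsystem with a 32 MiB malloc bdev, and expose that bdev as namespace 1. A sketch of the same bring-up written against SPDK's scripts/rpc.py (rpc_cmd in the test suite wraps the same RPCs; the fixed sleep below is a stand-in for the test's waitforlisten helper):

#!/usr/bin/env bash
# Bring up the zero-copy NVMe/TCP target with the same RPCs the test issues.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
NQN=nqn.2016-06.io.spdk:cnode1
rpc="$SPDK_DIR/scripts/rpc.py"

# Start nvmf_tgt on core mask 0x2 inside the target namespace (the test backgrounds
# it and waits on /var/tmp/spdk.sock with waitforlisten; a sleep approximates that).
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy           # TCP transport, zero-copy enabled
$rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                  # 32 MiB RAM bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns "$NQN" malloc0 -n 1              # attach it as namespace 1

The bdevperf run that follows does not go through rpc.py: gen_nvmf_target_json emits the bdev_nvme_attach_controller configuration printed just below, and the test hands it to bdevperf over a process file descriptor (--json /dev/fd/62).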
00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:47.872 08:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.872 "params": { 00:17:47.872 "name": "Nvme1", 00:17:47.872 "trtype": "tcp", 00:17:47.872 "traddr": "10.0.0.2", 00:17:47.872 "adrfam": "ipv4", 00:17:47.872 "trsvcid": "4420", 00:17:47.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.872 "hdgst": false, 00:17:47.872 "ddgst": false 00:17:47.872 }, 00:17:47.872 "method": "bdev_nvme_attach_controller" 00:17:47.872 }' 00:17:48.165 [2024-07-23 08:31:00.434374] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:17:48.165 [2024-07-23 08:31:00.434662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269144 ] 00:17:48.165 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.423 [2024-07-23 08:31:00.690760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.682 [2024-07-23 08:31:01.005251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.248 Running I/O for 10 seconds... 00:17:59.215 00:17:59.215 Latency(us) 00:17:59.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.215 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:59.215 Verification LBA range: start 0x0 length 0x1000 00:17:59.215 Nvme1n1 : 10.02 3453.82 26.98 0.00 0.00 36950.31 4975.88 50098.63 00:17:59.215 =================================================================================================================== 00:17:59.215 Total : 3453.82 26.98 0.00 0.00 36950.31 4975.88 50098.63 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2270585 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:00.600 { 00:18:00.600 "params": { 00:18:00.600 "name": "Nvme$subsystem", 00:18:00.600 "trtype": "$TEST_TRANSPORT", 00:18:00.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.600 "adrfam": "ipv4", 00:18:00.600 "trsvcid": "$NVMF_PORT", 00:18:00.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.600 "hdgst": ${hdgst:-false}, 00:18:00.600 "ddgst": ${ddgst:-false} 00:18:00.600 }, 00:18:00.600 "method": "bdev_nvme_attach_controller" 00:18:00.600 } 00:18:00.600 EOF 00:18:00.600 )") 00:18:00.600 [2024-07-23 
08:31:12.961764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.600 [2024-07-23 08:31:12.961835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:00.600 [2024-07-23 08:31:12.969644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.600 [2024-07-23 08:31:12.969689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:00.600 08:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:00.600 "params": { 00:18:00.600 "name": "Nvme1", 00:18:00.600 "trtype": "tcp", 00:18:00.600 "traddr": "10.0.0.2", 00:18:00.600 "adrfam": "ipv4", 00:18:00.600 "trsvcid": "4420", 00:18:00.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.600 "hdgst": false, 00:18:00.600 "ddgst": false 00:18:00.600 }, 00:18:00.600 "method": "bdev_nvme_attach_controller" 00:18:00.600 }' 00:18:00.600 [2024-07-23 08:31:12.977756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.600 [2024-07-23 08:31:12.977800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.600 [2024-07-23 08:31:12.985701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.600 [2024-07-23 08:31:12.985743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.600 [2024-07-23 08:31:12.993737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.600 [2024-07-23 08:31:12.993789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.600 [2024-07-23 08:31:13.001756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.600 [2024-07-23 08:31:13.001800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.600 [2024-07-23 08:31:13.009786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.600 [2024-07-23 08:31:13.009830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.600 [2024-07-23 08:31:13.017779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.017821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.025813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.025853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.033811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.033851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.041865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.041906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.049890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.049932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.057887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.057930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.064945] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:00.601 [2024-07-23 08:31:13.065105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270585 ] 00:18:00.601 [2024-07-23 08:31:13.065932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.065973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.073955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.073995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.085976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.086018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.094036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.094077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.102013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.102053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.110069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.110109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.601 [2024-07-23 08:31:13.118091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.601 [2024-07-23 08:31:13.118132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.859 [2024-07-23 08:31:13.126092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.859 [2024-07-23 08:31:13.126147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.859 [2024-07-23 08:31:13.134142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.859 [2024-07-23 08:31:13.134192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.859 [2024-07-23 08:31:13.142162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.859 [2024-07-23 08:31:13.142202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.859 [2024-07-23 08:31:13.150163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.859 [2024-07-23 08:31:13.150203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.859 [2024-07-23 08:31:13.158216] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.859 [2024-07-23 08:31:13.158257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.859 [2024-07-23 08:31:13.166206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.859 [2024-07-23 08:31:13.166247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.859 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.859 [2024-07-23 08:31:13.174258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.174299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.182277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.182327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.190320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.190359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.198343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.198383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.206357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.206396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.214361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.214401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.222408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.222450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.230410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.230451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.238456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.238497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.241227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.860 [2024-07-23 08:31:13.246473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.246514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.254536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.254590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.262575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.262629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 
08:31:13.270551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.270591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.278545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.278593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.286618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.286659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.294596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.294636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.302649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.302690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.310661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.310701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.318666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.318706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.326706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.326746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.334731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.334772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.342743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.342783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.350789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.350828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.358784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.358825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.366830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.366871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.860 [2024-07-23 08:31:13.374860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.860 [2024-07-23 08:31:13.374900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.382890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.382932] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.390909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.390951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.398924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.398965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.406951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.406999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.415029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.415083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.422985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.423025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.431019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.431068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.439044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.439084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.447050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.447090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.118 [2024-07-23 08:31:13.455096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.118 [2024-07-23 08:31:13.455136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.463113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.463155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.471128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.471167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.479193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.479236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.487168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.487208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.495207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.495248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.503236] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.503277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.511232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.511271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.519277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.519327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.527306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.527356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.535318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.535360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.543357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.543397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.551364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.551406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.555573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.119 [2024-07-23 08:31:13.559414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.559456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.567442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.567483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.575508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.575563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.583523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.583579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.591512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.591553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.599501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.599542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.607546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.607586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.615547] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.615586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.623598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.623639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.631625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.631666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.119 [2024-07-23 08:31:13.639624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.119 [2024-07-23 08:31:13.639679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.647682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.647725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.655701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.655743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.663757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.663812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.671826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.671881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.679796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.679851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.687842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.687896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.695861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.695916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.703818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.703859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.711894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.711935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.719917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.719957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.727886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.727927] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.735936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.735978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.743939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.743979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.752072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.752113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.760000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.760041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.768010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.768050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.776050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.776091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.784075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.784115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.792081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.792121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.800123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.800163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.808122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.808163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.816173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.816214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.824194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.824234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.832193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.832233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.840236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.840276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.848261] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.848301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.856333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.856397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.864411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.864464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.872380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.872432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.880386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.880436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.888401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.888440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.377 [2024-07-23 08:31:13.896405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.377 [2024-07-23 08:31:13.896447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.904458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.904499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.912466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.912506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.920462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.920502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.928517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.928557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.936509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.936549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.944553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.944593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.952575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.952614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.960608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.960648] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.968631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.968670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.976652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.976691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.984666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.984706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:13.992713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:13.992756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.000702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.000743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.008766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.008810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.016788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.016832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.024790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.024834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.032835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.032888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.040856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.040899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.048862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.048907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.056931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.056974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.064907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.064950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.073054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.073100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.081083] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.081126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 Running I/O for 5 seconds... 00:18:01.636 [2024-07-23 08:31:14.089232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.089278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.110012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.110062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.127976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.128024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.636 [2024-07-23 08:31:14.145741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.636 [2024-07-23 08:31:14.145790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.164118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.164167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.183179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.183228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.202198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.202246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.220784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.220832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.238888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.238937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.257020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.257068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.275370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.275418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.294194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.294243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.312373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.312431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.331002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 
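The interleaved pairs of subsystem.c:2058 and nvmf_rpc.c:1553 errors above and below are the target rejecting repeated nvmf_subsystem_add_ns calls: NSID 1 is already attached to cnode1, so each attempt fails with "Requested NSID 1 already in use" followed by "Unable to add namespace". The zcopy test appears to issue these adds in a loop while the second bdevperf job runs, so the failures are expected rather than fatal. A minimal loop that provokes the same rejection against the target sketched earlier (assuming the same $SPDK_DIR and $NQN; '|| true' keeps the loop going because every call is expected to fail):

# Re-adding an NSID that is already attached is rejected each time.
for _ in $(seq 1 10); do
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
done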
[2024-07-23 08:31:14.331051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.348757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.348805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.366965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.367014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.385504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.385552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.894 [2024-07-23 08:31:14.403324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.894 [2024-07-23 08:31:14.403372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.422376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.422425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.441344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.441392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.460358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.460406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.478620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.478668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.496847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.496896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.515194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.515242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.533761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.533810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.552508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.552556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.571096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.571144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.589969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.590016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.608285] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.608348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.627675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.627724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.646046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.646094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.153 [2024-07-23 08:31:14.665023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.153 [2024-07-23 08:31:14.665072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.683963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.684012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.702964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.703012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.722027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.722076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.740755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.740803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.760303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.760368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.779051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.779100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.797305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.797364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.815636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.815684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.834283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.834344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.852123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.852171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.870973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.871023] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.888855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.888904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.907864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.907915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.412 [2024-07-23 08:31:14.926840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.412 [2024-07-23 08:31:14.926890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:14.946061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:14.946110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:14.965132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:14.965181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:14.983028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:14.983078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.001819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.001868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.020692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.020742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.039515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.039565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.058064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.058113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.077889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.077938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.092649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.092698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.111295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.111355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.129770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.129819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.148020] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.148069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.166665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.166713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.670 [2024-07-23 08:31:15.184948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.670 [2024-07-23 08:31:15.184998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.204767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.204818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.223966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.224015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.242694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.242742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.262028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.262077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.280450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.280501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.300015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.300064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.318851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.318900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.337496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.337544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.356422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.356471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.375030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.375079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.393344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.393392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.411450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.411498] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.430042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.430090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.929 [2024-07-23 08:31:15.449165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.929 [2024-07-23 08:31:15.449214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.469048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.469097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.487611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.487661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.506999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.507049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.526469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.526518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.546396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.546445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.560110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.560158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.578903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.578952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.598187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.598236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.617204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.617253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.637214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.637263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.656046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.656095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.187 [2024-07-23 08:31:15.674636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.187 [2024-07-23 08:31:15.674685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.188 [2024-07-23 08:31:15.693726] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.188 [2024-07-23 08:31:15.693775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.712345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.712400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.730276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.730336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.748818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.748866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.767481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.767530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.785941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.785989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.805463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.805512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.824415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.824463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.842783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.842831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.861341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.861407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.880434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.880482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.899291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.899352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.918840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.918889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.937125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.937173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.446 [2024-07-23 08:31:15.955320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.446 [2024-07-23 08:31:15.955368] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:15.974883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:15.974931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:15.993348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:15.993396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.011867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.011916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.030630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.030678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.049799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.049848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.068499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.068557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.086617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.086667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.104975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.105024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.122983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.123032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.141675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.141725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.159852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.159902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.178938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.178988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.197406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.197455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.704 [2024-07-23 08:31:16.215983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.704 [2024-07-23 08:31:16.216032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.234479] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.234528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.254286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.254357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.273755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.273804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.292121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.292170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.310128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.310176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.328917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.328966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.347762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.347810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.366803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.366851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.386189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.386237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.404618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.404666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.422621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.422678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.441671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.441720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.460218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.460266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.964 [2024-07-23 08:31:16.478571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.964 [2024-07-23 08:31:16.478626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.497257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.497305] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.515280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.515340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.534681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.534729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.553490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.553537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.571845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.571894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.590206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.590254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.608278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.608336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.626563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.626611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.645228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.645279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.664623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.664671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.683666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.683713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.702734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.702783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.720972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.721020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.223 [2024-07-23 08:31:16.738857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.223 [2024-07-23 08:31:16.738906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.481 [2024-07-23 08:31:16.757420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.481 [2024-07-23 08:31:16.757468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.481 [2024-07-23 08:31:16.775285] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.481 [2024-07-23 08:31:16.775355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.481 [2024-07-23 08:31:16.793852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.481 [2024-07-23 08:31:16.793900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.481 [2024-07-23 08:31:16.811552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.481 [2024-07-23 08:31:16.811600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.481 [2024-07-23 08:31:16.829874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.481 [2024-07-23 08:31:16.829922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.848890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.848938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.867654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.867703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.886242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.886290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.904993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.905043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.923265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.923323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.941628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.941676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.960768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.960815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.979853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.979901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.482 [2024-07-23 08:31:16.998433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.482 [2024-07-23 08:31:16.998488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.017569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.017618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.037090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.037140] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.055484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.055533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.073861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.073909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.092016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.092064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.110638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.110686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.129470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.129529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.147493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.147543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.166072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.166120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.184913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.184963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.203972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.204020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.222879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.222927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.241736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.241784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.741 [2024-07-23 08:31:17.260065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.741 [2024-07-23 08:31:17.260113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.279335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.279384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.297750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.297799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.316021] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.316071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.334035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.334084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.353443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.353492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.371625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.371675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.390420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.390469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.409282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.409344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.427608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.427657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.445213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.445262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.464434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.464483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.483223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.483271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.999 [2024-07-23 08:31:17.502420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.999 [2024-07-23 08:31:17.502469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.257 [2024-07-23 08:31:17.522493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.257 [2024-07-23 08:31:17.522543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.257 [2024-07-23 08:31:17.540937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.540986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.560547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.560605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.579741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.579789] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.598590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.598639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.617126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.617175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.636884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.636932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.655367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.655414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.674250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.674300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.692806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.692855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.710885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.710934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.728596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.728645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.746691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.746739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.258 [2024-07-23 08:31:17.765735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.258 [2024-07-23 08:31:17.765784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.784382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.784431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.803809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.803858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.822724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.822774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.841492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.841541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.859618] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.859667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.877686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.877734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.896200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.896248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.914260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.914320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.932434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.932482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.950769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.950817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.969181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.969227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:17.987280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:17.987338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:18.004979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:18.005026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.517 [2024-07-23 08:31:18.023596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.517 [2024-07-23 08:31:18.023643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.042640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.042687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.060823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.060869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.078717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.078764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.097495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.097541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.116326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.116373] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.135204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.135251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.153841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.153888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.172644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.172690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.191036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.191083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.209049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.209096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.227361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.227415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.245735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.245781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.264267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.264325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.776 [2024-07-23 08:31:18.283123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.776 [2024-07-23 08:31:18.283170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.301792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.301840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.320481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.320529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.338983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.339030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.357028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.357075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.375569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.375616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.393939] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.393986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.412230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.412276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.431441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.431488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.449743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.449790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.468660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.468707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.487947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.487993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.507176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.507223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.525809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.525856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.035 [2024-07-23 08:31:18.544162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.035 [2024-07-23 08:31:18.544208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.563339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.563386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.582626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.582674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.601131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.601179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.619890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.619937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.637540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.637588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.656446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.656495] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.675448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.675497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.694710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.694761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.713723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.713770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.733703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.733752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.750417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.750464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.769390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.769438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.789059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.789107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.293 [2024-07-23 08:31:18.808014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.293 [2024-07-23 08:31:18.808062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.827215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.827265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.846284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.846343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.864191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.864239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.882895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.882952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.901677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.901723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.919933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.919980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.938705] 
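(Aside, not part of the captured log.) The two messages above are SPDK's standard rejection when an nvmf_subsystem_add_ns RPC requests a namespace ID that is already attached to the subsystem: spdk_nvmf_subsystem_add_ns_ext refuses the duplicate NSID and the RPC handler then reports "Unable to add namespace"; the surrounding test loop keeps retrying, so the pair repeats. A minimal sketch of the kind of call that triggers this, assuming SPDK's stock scripts/rpc.py helper and using placeholder NQN/bdev names that do not come from this log:

  # hedged sketch (placeholder names, not from this log); a second add that reuses NSID 1 is rejected
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0   # attaches NSID 1
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1   # fails: NSID 1 already in use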
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.938752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.957008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.957055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.976475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.976522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:18.995147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:18.995193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:19.013823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:19.013870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:19.033475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:19.033522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:19.052468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:19.052513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.552 [2024-07-23 08:31:19.070174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.552 [2024-07-23 08:31:19.070221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.810 [2024-07-23 08:31:19.088975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.810 [2024-07-23 08:31:19.089025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.810 [2024-07-23 08:31:19.107683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.810 [2024-07-23 08:31:19.107731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.810 [2024-07-23 08:31:19.117141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.810 [2024-07-23 08:31:19.117189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.810 00:18:06.810 Latency(us) 00:18:06.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.810 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:06.810 Nvme1n1 : 5.02 6802.25 53.14 0.00 0.00 18778.56 5291.43 32039.82 00:18:06.810 =================================================================================================================== 00:18:06.810 Total : 6802.25 53.14 0.00 0.00 18778.56 5291.43 32039.82 00:18:06.810 [2024-07-23 08:31:19.123398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.810 [2024-07-23 08:31:19.123442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:06.810 [2024-07-23 08:31:19.131490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:06.810 [2024-07-23 08:31:19.131536] 
(the same error pair repeats continuously for every subsequent namespace-add attempt)
00:18:08.135 [2024-07-23 08:31:20.427244]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.135 [2024-07-23 08:31:20.427284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.135 [2024-07-23 08:31:20.435298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.135 [2024-07-23 08:31:20.435348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2270585) - No such process 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2270585 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:08.135 delay0 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.135 08:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:08.135 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.393 [2024-07-23 08:31:20.688539] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:16.506 Initializing NVMe Controllers 00:18:16.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:16.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:16.506 Initialization complete. Launching workers. 
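For reference, the trace above (target/zcopy.sh@52 through @56) swaps the subsystem's namespace for a delay bdev and then drives the standalone abort example against the target. A minimal sketch of the same sequence outside the harness is shown below; it assumes scripts/rpc.py can stand in for the harness's rpc_cmd wrapper and simply reuses the workspace path, NQN, bdev names and flags visible in the trace, so treat it as illustrative rather than the exact test code.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk          # workspace checkout used in this run (assumed)
  # Free NSID 1, back it with an artificially slow delay bdev, and re-expose it as NSID 1.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # latency values as passed by zcopy.sh@53
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 5 s of queued random I/O over NVMe/TCP, with abort commands issued against outstanding requests.
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Attempting nvmf_subsystem_add_ns with an NSID that is still attached fails with exactly the "Requested NSID 1 already in use" / "Unable to add namespace" pair summarized earlier in this run.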
00:18:16.506 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 12452 00:18:16.506 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12565, failed to submit 125 00:18:16.506 success 12494, unsuccess 71, failed 0 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.506 rmmod nvme_tcp 00:18:16.506 rmmod nvme_fabrics 00:18:16.506 rmmod nvme_keyring 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2268983 ']' 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2268983 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2268983 ']' 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2268983 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2268983 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2268983' 00:18:16.506 killing process with pid 2268983 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2268983 00:18:16.506 08:31:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2268983 00:18:17.440 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.441 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.441 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.441 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.441 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.441 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.441 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.441 08:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.346 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.346 00:18:19.346 real 0m37.011s 00:18:19.346 user 0m53.308s 00:18:19.346 sys 0m11.172s 00:18:19.346 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.346 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:19.346 ************************************ 00:18:19.347 END TEST nvmf_zcopy 00:18:19.347 ************************************ 00:18:19.347 08:31:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:18:19.347 08:31:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:19.347 08:31:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:19.347 08:31:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.347 08:31:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:19.347 ************************************ 00:18:19.347 START TEST nvmf_nmic 00:18:19.347 ************************************ 00:18:19.347 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:19.606 * Looking for test storage... 
00:18:19.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.606 08:31:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.606 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.607 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.607 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:19.607 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:19.607 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.607 08:31:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.898 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:22.899 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:22.899 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.899 08:31:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:22.899 Found net devices under 0000:84:00.0: cvl_0_0 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:22.899 Found net devices under 0000:84:00.1: cvl_0_1 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.899 08:31:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:22.899 00:18:22.899 --- 10.0.0.2 ping statistics --- 00:18:22.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.899 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:18:22.899 00:18:22.899 --- 10.0.0.1 ping statistics --- 00:18:22.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.899 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2274394 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2274394 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2274394 ']' 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.899 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.900 08:31:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:22.900 [2024-07-23 08:31:35.274165] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:22.900 [2024-07-23 08:31:35.274367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.900 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.159 [2024-07-23 08:31:35.504177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:23.727 [2024-07-23 08:31:35.972700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.727 [2024-07-23 08:31:35.972826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.727 [2024-07-23 08:31:35.972888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.727 [2024-07-23 08:31:35.972936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.727 [2024-07-23 08:31:35.972985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
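Note: the nvmf_tcp_init block traced above moves one port of the dual-port NIC into a private network namespace so the target and the initiator can reach each other over real hardware on a single host. A minimal hand-runnable sketch of that plumbing, assuming the same cvl_0_0 / cvl_0_1 interface names and 10.0.0.0/24 addressing that the log uses (command order is regrouped for readability, not copied verbatim):

  # target side: cvl_0_0 lives in its own namespace with 10.0.0.2
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side: cvl_0_1 stays in the default namespace with 10.0.0.1
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions, as the pings in the trace just above do
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1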
00:18:23.727 [2024-07-23 08:31:35.973213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.727 [2024-07-23 08:31:35.973276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.727 [2024-07-23 08:31:35.973345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.727 [2024-07-23 08:31:35.973355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.293 [2024-07-23 08:31:36.692216] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.293 Malloc0 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.293 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.551 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.551 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.551 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.552 [2024-07-23 08:31:36.825246] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:24.552 test case1: single bdev can't be used in multiple subsystems 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.552 [2024-07-23 08:31:36.849019] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:24.552 [2024-07-23 08:31:36.849084] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:24.552 [2024-07-23 08:31:36.849118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.552 request: 00:18:24.552 { 00:18:24.552 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:24.552 "namespace": { 00:18:24.552 "bdev_name": "Malloc0", 00:18:24.552 "no_auto_visible": false 00:18:24.552 }, 00:18:24.552 "method": "nvmf_subsystem_add_ns", 00:18:24.552 "req_id": 1 00:18:24.552 } 00:18:24.552 Got JSON-RPC error response 00:18:24.552 response: 00:18:24.552 { 00:18:24.552 "code": -32602, 00:18:24.552 "message": "Invalid parameters" 00:18:24.552 } 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:24.552 Adding namespace failed - expected result. 
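That JSON-RPC failure is the point of test case1: nvmf_subsystem_add_ns opens the bdev with an exclusive_write claim for the first subsystem, so a second subsystem cannot attach the same bdev. Condensed to the underlying RPCs (rpc_cmd in the trace resolves to scripts/rpc.py against the target's /var/tmp/spdk.sock), the sequence is roughly:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds and claims Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed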
00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:24.552 test case2: host connect to nvmf target in multiple paths 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.552 [2024-07-23 08:31:36.861241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.552 08:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:25.118 08:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:26.052 08:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:26.052 08:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:26.052 08:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.052 08:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:26.052 08:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:27.951 08:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:27.951 08:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:27.951 08:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.951 08:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:27.951 08:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.951 08:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:27.951 08:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:27.951 [global] 00:18:27.951 thread=1 00:18:27.951 invalidate=1 00:18:27.951 rw=write 00:18:27.951 time_based=1 00:18:27.951 runtime=1 00:18:27.951 ioengine=libaio 00:18:27.951 direct=1 00:18:27.951 bs=4096 00:18:27.951 iodepth=1 00:18:27.951 norandommap=0 00:18:27.951 numjobs=1 00:18:27.951 00:18:27.951 verify_dump=1 00:18:27.951 verify_backlog=512 00:18:27.951 verify_state_save=0 00:18:27.951 do_verify=1 00:18:27.951 verify=crc32c-intel 00:18:27.951 [job0] 00:18:27.951 filename=/dev/nvme0n1 00:18:27.951 Could not set queue depth (nvme0n1) 00:18:28.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:18:28.209 fio-3.35 00:18:28.209 Starting 1 thread 00:18:29.183 00:18:29.183 job0: (groupid=0, jobs=1): err= 0: pid=2275157: Tue Jul 23 08:31:41 2024 00:18:29.183 read: IOPS=338, BW=1355KiB/s (1387kB/s)(1356KiB/1001msec) 00:18:29.183 slat (nsec): min=8352, max=36306, avg=14597.44, stdev=7914.37 00:18:29.183 clat (usec): min=289, max=41466, avg=2323.83, stdev=8617.73 00:18:29.183 lat (usec): min=297, max=41488, avg=2338.43, stdev=8621.25 00:18:29.183 clat percentiles (usec): 00:18:29.183 | 1.00th=[ 302], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 351], 00:18:29.183 | 30.00th=[ 363], 40.00th=[ 379], 50.00th=[ 396], 60.00th=[ 416], 00:18:29.183 | 70.00th=[ 449], 80.00th=[ 490], 90.00th=[ 537], 95.00th=[ 848], 00:18:29.183 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:18:29.183 | 99.99th=[41681] 00:18:29.183 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:29.183 slat (nsec): min=9524, max=89046, avg=28639.69, stdev=13456.51 00:18:29.183 clat (usec): min=217, max=507, avg=366.11, stdev=63.59 00:18:29.183 lat (usec): min=227, max=540, avg=394.75, stdev=70.17 00:18:29.183 clat percentiles (usec): 00:18:29.183 | 1.00th=[ 221], 5.00th=[ 237], 10.00th=[ 269], 20.00th=[ 322], 00:18:29.183 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 371], 60.00th=[ 388], 00:18:29.183 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 441], 95.00th=[ 453], 00:18:29.183 | 99.00th=[ 482], 99.50th=[ 486], 99.90th=[ 506], 99.95th=[ 506], 00:18:29.183 | 99.99th=[ 506] 00:18:29.183 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:29.183 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:29.183 lat (usec) : 250=4.58%, 500=88.60%, 750=4.82%, 1000=0.12% 00:18:29.183 lat (msec) : 50=1.88% 00:18:29.183 cpu : usr=1.40%, sys=2.50%, ctx=851, majf=0, minf=2 00:18:29.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.183 issued rwts: total=339,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.183 00:18:29.183 Run status group 0 (all jobs): 00:18:29.183 READ: bw=1355KiB/s (1387kB/s), 1355KiB/s-1355KiB/s (1387kB/s-1387kB/s), io=1356KiB (1389kB), run=1001-1001msec 00:18:29.183 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:18:29.183 00:18:29.183 Disk stats (read/write): 00:18:29.183 nvme0n1: ios=139/512, merge=0/0, ticks=745/173, in_queue=918, util=92.59% 00:18:29.183 08:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:29.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:29.750 08:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:29.750 08:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:29.750 08:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:29.750 08:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:29.750 
08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.750 rmmod nvme_tcp 00:18:29.750 rmmod nvme_fabrics 00:18:29.750 rmmod nvme_keyring 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2274394 ']' 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2274394 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2274394 ']' 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2274394 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2274394 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2274394' 00:18:29.750 killing process with pid 2274394 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2274394 00:18:29.750 08:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2274394 00:18:32.288 08:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:32.288 08:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:32.288 08:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:32.288 08:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:32.288 08:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:32.288 08:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.288 08:31:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.288 08:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:34.232 00:18:34.232 real 0m14.746s 00:18:34.232 user 0m32.625s 00:18:34.232 sys 0m3.765s 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:34.232 ************************************ 00:18:34.232 END TEST nvmf_nmic 00:18:34.232 ************************************ 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:34.232 ************************************ 00:18:34.232 START TEST nvmf_fio_target 00:18:34.232 ************************************ 00:18:34.232 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:34.492 * Looking for test storage... 00:18:34.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 
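As in the nmic run above, the host NQN/ID pair just assigned comes from nvme gen-hostnqn, and the NVME_HOST array carries it into every initiator-side connect. A short sketch of how those values get used; the parameter expansion deriving the host ID is an assumption, while the connect/disconnect flags match the ones in the trace:

  NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # assumed: host ID reuses the uuid part of the NQN
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"    # second path to the same subsystem
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drops both controllers at once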
00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.492 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:34.493 08:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:37.785 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.785 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:37.786 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:37.786 Found net devices under 0000:84:00.0: cvl_0_0 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:37.786 Found net devices under 0000:84:00.1: cvl_0_1 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.786 08:31:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.786 08:31:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:18:37.786 00:18:37.786 --- 10.0.0.2 ping statistics --- 00:18:37.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.786 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:18:37.786 00:18:37.786 --- 10.0.0.1 ping statistics --- 00:18:37.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.786 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2277632 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2277632 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2277632 ']' 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.786 08:31:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.786 08:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.786 [2024-07-23 08:31:50.276259] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:37.786 [2024-07-23 08:31:50.276472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.046 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.046 [2024-07-23 08:31:50.477194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.614 [2024-07-23 08:31:50.971583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.614 [2024-07-23 08:31:50.971662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.614 [2024-07-23 08:31:50.971696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.614 [2024-07-23 08:31:50.971724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.614 [2024-07-23 08:31:50.971749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
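The target itself is then started inside that namespace. A condensed sketch of what the nvmfappstart/waitforlisten steps traced above amount to, assuming the working directory is the SPDK repository root; the real waitforlisten helper lives in autotest_common.sh, and the polling loop here is only an illustrative stand-in for it.

  # core mask 0xF (four reactors), all tracepoint groups enabled, instance id 0
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers RPCs on the default UNIX domain socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

Once the reactors report in (one per core in the 0xF mask, as in the four NOTICE lines above), the RPC socket is ready for the configuration calls that follow.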
00:18:38.614 [2024-07-23 08:31:50.975356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.614 [2024-07-23 08:31:50.975426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.614 [2024-07-23 08:31:50.975492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.614 [2024-07-23 08:31:50.975502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.181 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.181 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:39.181 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.181 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.181 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.181 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.181 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:39.439 [2024-07-23 08:31:51.806618] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.439 08:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:40.373 08:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:40.373 08:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:40.938 08:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:40.938 08:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:41.503 08:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:41.503 08:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:42.438 08:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:42.438 08:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:42.438 08:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.005 08:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:43.005 08:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.571 08:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:43.571 08:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.136 08:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:44.136 08:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:44.702 08:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:44.959 08:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:44.959 08:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.528 08:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:45.528 08:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:46.125 08:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.691 [2024-07-23 08:31:59.149763] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.691 08:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:47.256 08:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:47.514 08:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.448 08:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:48.448 08:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.448 08:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.448 08:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:48.448 08:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:48.448 08:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:50.346 08:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:50.346 08:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:50.346 08:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.346 08:32:02 
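Everything the fio jobs exercise below is provisioned through rpc.py and then attached from the initiator with nvme-cli. A condensed sketch of the sequence traced above, with paths shortened to the repository-relative scripts; the run above also passes --hostnqn/--hostid to nvme connect and interleaves the listener between the namespace additions, which does not change the outcome.

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done        # Malloc0 .. Malloc6 (64 MiB, 512 B blocks)
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: block until all four namespaces are visible on the initiator
  while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 4 )); do sleep 2; done

The resulting nvme0n1-nvme0n4 block devices are the filenames used by the fio passes that follow.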
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:50.346 08:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.346 08:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:50.346 08:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:50.346 [global] 00:18:50.346 thread=1 00:18:50.346 invalidate=1 00:18:50.346 rw=write 00:18:50.346 time_based=1 00:18:50.346 runtime=1 00:18:50.346 ioengine=libaio 00:18:50.346 direct=1 00:18:50.346 bs=4096 00:18:50.346 iodepth=1 00:18:50.346 norandommap=0 00:18:50.346 numjobs=1 00:18:50.346 00:18:50.346 verify_dump=1 00:18:50.346 verify_backlog=512 00:18:50.346 verify_state_save=0 00:18:50.346 do_verify=1 00:18:50.346 verify=crc32c-intel 00:18:50.346 [job0] 00:18:50.346 filename=/dev/nvme0n1 00:18:50.346 [job1] 00:18:50.346 filename=/dev/nvme0n2 00:18:50.346 [job2] 00:18:50.346 filename=/dev/nvme0n3 00:18:50.346 [job3] 00:18:50.346 filename=/dev/nvme0n4 00:18:50.346 Could not set queue depth (nvme0n1) 00:18:50.346 Could not set queue depth (nvme0n2) 00:18:50.346 Could not set queue depth (nvme0n3) 00:18:50.346 Could not set queue depth (nvme0n4) 00:18:50.605 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.605 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.605 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.605 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.605 fio-3.35 00:18:50.605 Starting 4 threads 00:18:51.980 00:18:51.980 job0: (groupid=0, jobs=1): err= 0: pid=2279132: Tue Jul 23 08:32:04 2024 00:18:51.980 read: IOPS=44, BW=180KiB/s (184kB/s)(180KiB/1001msec) 00:18:51.980 slat (nsec): min=11715, max=47827, avg=30365.62, stdev=6788.26 00:18:51.980 clat (usec): min=359, max=41426, avg=17649.95, stdev=20235.25 00:18:51.980 lat (usec): min=372, max=41458, avg=17680.31, stdev=20236.31 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[ 359], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 523], 00:18:51.980 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[40633], 00:18:51.980 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:51.980 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:51.980 | 99.99th=[41681] 00:18:51.980 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:51.980 slat (nsec): min=9436, max=62513, avg=26191.74, stdev=9947.61 00:18:51.980 clat (usec): min=251, max=877, avg=366.96, stdev=54.63 00:18:51.980 lat (usec): min=264, max=917, avg=393.15, stdev=58.12 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 310], 20.00th=[ 326], 00:18:51.980 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 375], 00:18:51.980 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 433], 95.00th=[ 453], 00:18:51.980 | 99.00th=[ 490], 99.50th=[ 537], 99.90th=[ 881], 99.95th=[ 881], 00:18:51.980 | 99.99th=[ 881] 00:18:51.980 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.980 iops : min= 1024, max= 1024, avg=1024.00, 
stdev= 0.00, samples=1 00:18:51.980 lat (usec) : 500=92.64%, 750=3.77%, 1000=0.18% 00:18:51.980 lat (msec) : 50=3.41% 00:18:51.980 cpu : usr=0.60%, sys=2.10%, ctx=558, majf=0, minf=1 00:18:51.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 issued rwts: total=45,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.980 job1: (groupid=0, jobs=1): err= 0: pid=2279160: Tue Jul 23 08:32:04 2024 00:18:51.980 read: IOPS=19, BW=78.1KiB/s (80.0kB/s)(80.0KiB/1024msec) 00:18:51.980 slat (nsec): min=16306, max=32077, avg=28927.60, stdev=3738.43 00:18:51.980 clat (usec): min=40880, max=41397, avg=40976.93, stdev=103.19 00:18:51.980 lat (usec): min=40910, max=41417, avg=41005.85, stdev=100.75 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:51.980 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:51.980 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:51.980 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:51.980 | 99.99th=[41157] 00:18:51.980 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:18:51.980 slat (nsec): min=9827, max=42430, avg=21671.22, stdev=5755.98 00:18:51.980 clat (usec): min=240, max=1603, avg=371.64, stdev=71.50 00:18:51.980 lat (usec): min=251, max=1625, avg=393.31, stdev=72.90 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 330], 00:18:51.980 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 367], 60.00th=[ 375], 00:18:51.980 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 457], 00:18:51.980 | 99.00th=[ 510], 99.50th=[ 586], 99.90th=[ 1598], 99.95th=[ 1598], 00:18:51.980 | 99.99th=[ 1598] 00:18:51.980 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.980 lat (usec) : 250=0.19%, 500=94.74%, 750=1.13% 00:18:51.980 lat (msec) : 2=0.19%, 50=3.76% 00:18:51.980 cpu : usr=1.08%, sys=0.49%, ctx=532, majf=0, minf=1 00:18:51.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.980 job2: (groupid=0, jobs=1): err= 0: pid=2279197: Tue Jul 23 08:32:04 2024 00:18:51.980 read: IOPS=22, BW=88.8KiB/s (90.9kB/s)(92.0KiB/1036msec) 00:18:51.980 slat (nsec): min=21537, max=42352, avg=34474.09, stdev=3415.44 00:18:51.980 clat (usec): min=522, max=41246, avg=35700.64, stdev=13912.06 00:18:51.980 lat (usec): min=556, max=41268, avg=35735.12, stdev=13912.03 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[ 523], 5.00th=[ 553], 10.00th=[ 635], 20.00th=[40633], 00:18:51.980 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:51.980 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:51.980 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:51.980 
| 99.99th=[41157] 00:18:51.980 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:18:51.980 slat (nsec): min=11245, max=75508, avg=26878.04, stdev=8907.19 00:18:51.980 clat (usec): min=280, max=629, avg=385.03, stdev=48.38 00:18:51.980 lat (usec): min=292, max=656, avg=411.91, stdev=50.75 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[ 293], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 347], 00:18:51.980 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 392], 00:18:51.980 | 70.00th=[ 404], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 482], 00:18:51.980 | 99.00th=[ 515], 99.50th=[ 523], 99.90th=[ 627], 99.95th=[ 627], 00:18:51.980 | 99.99th=[ 627] 00:18:51.980 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.980 lat (usec) : 500=93.08%, 750=3.18% 00:18:51.980 lat (msec) : 50=3.74% 00:18:51.980 cpu : usr=0.58%, sys=1.35%, ctx=537, majf=0, minf=2 00:18:51.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.980 job3: (groupid=0, jobs=1): err= 0: pid=2279209: Tue Jul 23 08:32:04 2024 00:18:51.980 read: IOPS=19, BW=77.1KiB/s (78.9kB/s)(80.0KiB/1038msec) 00:18:51.980 slat (nsec): min=17404, max=39203, avg=33825.65, stdev=5647.13 00:18:51.980 clat (usec): min=40861, max=41620, avg=40982.74, stdev=155.41 00:18:51.980 lat (usec): min=40896, max=41656, avg=41016.56, stdev=155.91 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:51.980 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:51.980 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:51.980 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:51.980 | 99.99th=[41681] 00:18:51.980 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:18:51.980 slat (nsec): min=10488, max=65965, avg=31573.13, stdev=11187.20 00:18:51.980 clat (usec): min=272, max=664, avg=386.06, stdev=49.62 00:18:51.980 lat (usec): min=291, max=706, avg=417.63, stdev=53.44 00:18:51.980 clat percentiles (usec): 00:18:51.980 | 1.00th=[ 289], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 347], 00:18:51.980 | 30.00th=[ 355], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 396], 00:18:51.980 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 469], 00:18:51.980 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 668], 99.95th=[ 668], 00:18:51.980 | 99.99th=[ 668] 00:18:51.980 bw ( KiB/s): min= 4096, max= 4096, per=51.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:51.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:51.980 lat (usec) : 500=94.55%, 750=1.69% 00:18:51.980 lat (msec) : 50=3.76% 00:18:51.980 cpu : usr=1.06%, sys=1.93%, ctx=533, majf=0, minf=1 00:18:51.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.980 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.980 
latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.980 00:18:51.980 Run status group 0 (all jobs): 00:18:51.980 READ: bw=416KiB/s (426kB/s), 77.1KiB/s-180KiB/s (78.9kB/s-184kB/s), io=432KiB (442kB), run=1001-1038msec 00:18:51.980 WRITE: bw=7892KiB/s (8082kB/s), 1973KiB/s-2046KiB/s (2020kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1038msec 00:18:51.980 00:18:51.980 Disk stats (read/write): 00:18:51.980 nvme0n1: ios=64/512, merge=0/0, ticks=1614/176, in_queue=1790, util=96.79% 00:18:51.980 nvme0n2: ios=39/512, merge=0/0, ticks=641/192, in_queue=833, util=86.40% 00:18:51.980 nvme0n3: ios=41/512, merge=0/0, ticks=1560/192, in_queue=1752, util=97.25% 00:18:51.980 nvme0n4: ios=38/512, merge=0/0, ticks=1559/174, in_queue=1733, util=97.22% 00:18:51.980 08:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:51.980 [global] 00:18:51.980 thread=1 00:18:51.980 invalidate=1 00:18:51.980 rw=randwrite 00:18:51.980 time_based=1 00:18:51.980 runtime=1 00:18:51.980 ioengine=libaio 00:18:51.980 direct=1 00:18:51.980 bs=4096 00:18:51.980 iodepth=1 00:18:51.980 norandommap=0 00:18:51.980 numjobs=1 00:18:51.980 00:18:51.980 verify_dump=1 00:18:51.980 verify_backlog=512 00:18:51.980 verify_state_save=0 00:18:51.980 do_verify=1 00:18:51.980 verify=crc32c-intel 00:18:51.980 [job0] 00:18:51.980 filename=/dev/nvme0n1 00:18:51.981 [job1] 00:18:51.981 filename=/dev/nvme0n2 00:18:51.981 [job2] 00:18:51.981 filename=/dev/nvme0n3 00:18:51.981 [job3] 00:18:51.981 filename=/dev/nvme0n4 00:18:51.981 Could not set queue depth (nvme0n1) 00:18:51.981 Could not set queue depth (nvme0n2) 00:18:51.981 Could not set queue depth (nvme0n3) 00:18:51.981 Could not set queue depth (nvme0n4) 00:18:52.239 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.239 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.239 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.239 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.239 fio-3.35 00:18:52.239 Starting 4 threads 00:18:53.614 00:18:53.614 job0: (groupid=0, jobs=1): err= 0: pid=2279466: Tue Jul 23 08:32:05 2024 00:18:53.614 read: IOPS=336, BW=1347KiB/s (1380kB/s)(1400KiB/1039msec) 00:18:53.614 slat (nsec): min=7993, max=48656, avg=20244.32, stdev=4797.39 00:18:53.614 clat (usec): min=376, max=41771, avg=2375.53, stdev=8489.68 00:18:53.614 lat (usec): min=387, max=41796, avg=2395.78, stdev=8490.40 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 416], 00:18:53.614 | 30.00th=[ 429], 40.00th=[ 453], 50.00th=[ 506], 60.00th=[ 537], 00:18:53.614 | 70.00th=[ 562], 80.00th=[ 594], 90.00th=[ 701], 95.00th=[ 1663], 00:18:53.614 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:53.614 | 99.99th=[41681] 00:18:53.614 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:18:53.614 slat (usec): min=14, max=127, avg=25.19, stdev= 6.16 00:18:53.614 clat (usec): min=290, max=539, avg=353.47, stdev=33.53 00:18:53.614 lat (usec): min=313, max=579, avg=378.66, stdev=34.06 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 322], 
00:18:53.614 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:18:53.614 | 70.00th=[ 371], 80.00th=[ 375], 90.00th=[ 383], 95.00th=[ 396], 00:18:53.614 | 99.00th=[ 465], 99.50th=[ 537], 99.90th=[ 537], 99.95th=[ 537], 00:18:53.614 | 99.99th=[ 537] 00:18:53.614 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.614 lat (usec) : 500=78.77%, 750=17.40%, 1000=1.28% 00:18:53.614 lat (msec) : 2=0.70%, 50=1.86% 00:18:53.614 cpu : usr=1.54%, sys=2.41%, ctx=863, majf=0, minf=1 00:18:53.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 issued rwts: total=350,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.614 job1: (groupid=0, jobs=1): err= 0: pid=2279467: Tue Jul 23 08:32:05 2024 00:18:53.614 read: IOPS=112, BW=451KiB/s (461kB/s)(460KiB/1021msec) 00:18:53.614 slat (nsec): min=9018, max=49842, avg=19627.37, stdev=6685.07 00:18:53.614 clat (usec): min=356, max=41681, avg=7050.76, stdev=14916.49 00:18:53.614 lat (usec): min=366, max=41696, avg=7070.38, stdev=14915.74 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 363], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 400], 00:18:53.614 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 445], 60.00th=[ 465], 00:18:53.614 | 70.00th=[ 482], 80.00th=[ 523], 90.00th=[41157], 95.00th=[41157], 00:18:53.614 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:53.614 | 99.99th=[41681] 00:18:53.614 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:18:53.614 slat (nsec): min=11250, max=58072, avg=25190.14, stdev=4184.82 00:18:53.614 clat (usec): min=283, max=575, avg=372.45, stdev=34.14 00:18:53.614 lat (usec): min=306, max=603, avg=397.64, stdev=34.94 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 293], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 351], 00:18:53.614 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 379], 00:18:53.614 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 424], 00:18:53.614 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 578], 99.95th=[ 578], 00:18:53.614 | 99.99th=[ 578] 00:18:53.614 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.614 lat (usec) : 500=95.69%, 750=0.96%, 1000=0.16% 00:18:53.614 lat (msec) : 10=0.16%, 50=3.03% 00:18:53.614 cpu : usr=1.37%, sys=1.67%, ctx=627, majf=0, minf=1 00:18:53.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 issued rwts: total=115,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.614 job2: (groupid=0, jobs=1): err= 0: pid=2279468: Tue Jul 23 08:32:05 2024 00:18:53.614 read: IOPS=46, BW=185KiB/s (190kB/s)(192KiB/1037msec) 00:18:53.614 slat (nsec): min=8914, max=42832, avg=24016.60, stdev=7636.95 00:18:53.614 clat (usec): min=502, max=41480, avg=17435.56, stdev=20136.33 00:18:53.614 lat (usec): min=525, 
max=41501, avg=17459.58, stdev=20139.59 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 502], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 594], 00:18:53.614 | 30.00th=[ 603], 40.00th=[ 603], 50.00th=[ 611], 60.00th=[40633], 00:18:53.614 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:53.614 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:53.614 | 99.99th=[41681] 00:18:53.614 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:18:53.614 slat (nsec): min=15721, max=41068, avg=24518.89, stdev=2444.03 00:18:53.614 clat (usec): min=297, max=1075, avg=355.99, stdev=40.02 00:18:53.614 lat (usec): min=321, max=1099, avg=380.51, stdev=40.22 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 334], 00:18:53.614 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:18:53.614 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 396], 00:18:53.614 | 99.00th=[ 424], 99.50th=[ 457], 99.90th=[ 1074], 99.95th=[ 1074], 00:18:53.614 | 99.99th=[ 1074] 00:18:53.614 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.614 lat (usec) : 500=91.25%, 750=5.00% 00:18:53.614 lat (msec) : 2=0.18%, 50=3.57% 00:18:53.614 cpu : usr=0.19%, sys=1.74%, ctx=562, majf=0, minf=1 00:18:53.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 issued rwts: total=48,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.614 job3: (groupid=0, jobs=1): err= 0: pid=2279469: Tue Jul 23 08:32:05 2024 00:18:53.614 read: IOPS=300, BW=1200KiB/s (1229kB/s)(1224KiB/1020msec) 00:18:53.614 slat (nsec): min=9000, max=57788, avg=22227.71, stdev=5654.20 00:18:53.614 clat (usec): min=340, max=41482, avg=2623.01, stdev=9037.18 00:18:53.614 lat (usec): min=350, max=41502, avg=2645.24, stdev=9038.77 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 441], 00:18:53.614 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 482], 60.00th=[ 506], 00:18:53.614 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 635], 95.00th=[40633], 00:18:53.614 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:53.614 | 99.99th=[41681] 00:18:53.614 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:18:53.614 slat (nsec): min=16984, max=63021, avg=26400.19, stdev=4678.88 00:18:53.614 clat (usec): min=304, max=549, avg=372.30, stdev=34.24 00:18:53.614 lat (usec): min=326, max=579, avg=398.70, stdev=35.70 00:18:53.614 clat percentiles (usec): 00:18:53.614 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 351], 00:18:53.614 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:18:53.614 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 441], 00:18:53.614 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 553], 99.95th=[ 553], 00:18:53.614 | 99.99th=[ 553] 00:18:53.614 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.614 lat (usec) : 500=82.52%, 750=14.67%, 1000=0.61% 00:18:53.614 lat (msec) 
: 2=0.12%, 4=0.12%, 50=1.96% 00:18:53.614 cpu : usr=1.18%, sys=2.45%, ctx=820, majf=0, minf=1 00:18:53.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.614 issued rwts: total=306,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.614 00:18:53.614 Run status group 0 (all jobs): 00:18:53.614 READ: bw=3153KiB/s (3229kB/s), 185KiB/s-1347KiB/s (190kB/s-1380kB/s), io=3276KiB (3355kB), run=1020-1039msec 00:18:53.615 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-2008KiB/s (2018kB/s-2056kB/s), io=8192KiB (8389kB), run=1020-1039msec 00:18:53.615 00:18:53.615 Disk stats (read/write): 00:18:53.615 nvme0n1: ios=399/512, merge=0/0, ticks=824/171, in_queue=995, util=84.43% 00:18:53.615 nvme0n2: ios=131/512, merge=0/0, ticks=933/184, in_queue=1117, util=89.00% 00:18:53.615 nvme0n3: ios=105/512, merge=0/0, ticks=1266/178, in_queue=1444, util=99.30% 00:18:53.615 nvme0n4: ios=358/512, merge=0/0, ticks=947/178, in_queue=1125, util=99.20% 00:18:53.615 08:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:53.615 [global] 00:18:53.615 thread=1 00:18:53.615 invalidate=1 00:18:53.615 rw=write 00:18:53.615 time_based=1 00:18:53.615 runtime=1 00:18:53.615 ioengine=libaio 00:18:53.615 direct=1 00:18:53.615 bs=4096 00:18:53.615 iodepth=128 00:18:53.615 norandommap=0 00:18:53.615 numjobs=1 00:18:53.615 00:18:53.615 verify_dump=1 00:18:53.615 verify_backlog=512 00:18:53.615 verify_state_save=0 00:18:53.615 do_verify=1 00:18:53.615 verify=crc32c-intel 00:18:53.615 [job0] 00:18:53.615 filename=/dev/nvme0n1 00:18:53.615 [job1] 00:18:53.615 filename=/dev/nvme0n2 00:18:53.615 [job2] 00:18:53.615 filename=/dev/nvme0n3 00:18:53.615 [job3] 00:18:53.615 filename=/dev/nvme0n4 00:18:53.615 Could not set queue depth (nvme0n1) 00:18:53.615 Could not set queue depth (nvme0n2) 00:18:53.615 Could not set queue depth (nvme0n3) 00:18:53.615 Could not set queue depth (nvme0n4) 00:18:53.873 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.873 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.873 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.873 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:53.873 fio-3.35 00:18:53.873 Starting 4 threads 00:18:55.247 00:18:55.247 job0: (groupid=0, jobs=1): err= 0: pid=2279692: Tue Jul 23 08:32:07 2024 00:18:55.247 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1020msec) 00:18:55.247 slat (usec): min=4, max=24424, avg=164.58, stdev=1290.49 00:18:55.247 clat (usec): min=10724, max=73142, avg=22077.43, stdev=9572.10 00:18:55.247 lat (usec): min=10741, max=73155, avg=22242.01, stdev=9673.51 00:18:55.247 clat percentiles (usec): 00:18:55.247 | 1.00th=[11469], 5.00th=[13566], 10.00th=[14746], 20.00th=[15664], 00:18:55.247 | 30.00th=[16450], 40.00th=[16909], 50.00th=[19268], 60.00th=[20579], 00:18:55.247 | 70.00th=[22676], 80.00th=[26346], 90.00th=[36963], 95.00th=[47973], 00:18:55.247 | 99.00th=[49546], 99.50th=[50070], 99.90th=[72877], 
99.95th=[72877], 00:18:55.247 | 99.99th=[72877] 00:18:55.247 write: IOPS=3190, BW=12.5MiB/s (13.1MB/s)(12.7MiB/1020msec); 0 zone resets 00:18:55.247 slat (usec): min=5, max=22987, avg=131.38, stdev=1088.65 00:18:55.247 clat (usec): min=1545, max=46399, avg=18731.27, stdev=6894.58 00:18:55.247 lat (usec): min=1564, max=46415, avg=18862.64, stdev=6947.66 00:18:55.248 clat percentiles (usec): 00:18:55.248 | 1.00th=[ 8979], 5.00th=[11731], 10.00th=[12780], 20.00th=[14222], 00:18:55.248 | 30.00th=[15139], 40.00th=[15664], 50.00th=[16581], 60.00th=[17695], 00:18:55.248 | 70.00th=[19792], 80.00th=[22938], 90.00th=[26870], 95.00th=[32113], 00:18:55.248 | 99.00th=[45876], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:18:55.248 | 99.99th=[46400] 00:18:55.248 bw ( KiB/s): min= 9904, max=15112, per=34.16%, avg=12508.00, stdev=3682.61, samples=2 00:18:55.248 iops : min= 2476, max= 3778, avg=3127.00, stdev=920.65, samples=2 00:18:55.248 lat (msec) : 2=0.14%, 10=1.19%, 20=63.22%, 50=35.24%, 100=0.22% 00:18:55.248 cpu : usr=5.10%, sys=8.64%, ctx=174, majf=0, minf=11 00:18:55.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:55.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.248 issued rwts: total=3072,3254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.248 job1: (groupid=0, jobs=1): err= 0: pid=2279693: Tue Jul 23 08:32:07 2024 00:18:55.248 read: IOPS=1596, BW=6385KiB/s (6538kB/s)(6404KiB/1003msec) 00:18:55.248 slat (usec): min=5, max=28135, avg=308.64, stdev=1571.01 00:18:55.248 clat (usec): min=1083, max=120613, avg=42375.73, stdev=20379.09 00:18:55.248 lat (msec): min=10, max=120, avg=42.68, stdev=20.42 00:18:55.248 clat percentiles (msec): 00:18:55.248 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 33], 20.00th=[ 34], 00:18:55.248 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 39], 00:18:55.248 | 70.00th=[ 41], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 104], 00:18:55.248 | 99.00th=[ 115], 99.50th=[ 118], 99.90th=[ 122], 99.95th=[ 122], 00:18:55.248 | 99.99th=[ 122] 00:18:55.248 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:18:55.248 slat (usec): min=6, max=21992, avg=234.01, stdev=1349.09 00:18:55.248 clat (usec): min=18271, max=66264, avg=28587.75, stdev=7847.55 00:18:55.248 lat (usec): min=20449, max=88256, avg=28821.76, stdev=7899.55 00:18:55.248 clat percentiles (usec): 00:18:55.248 | 1.00th=[19268], 5.00th=[23462], 10.00th=[23725], 20.00th=[24249], 00:18:55.248 | 30.00th=[24773], 40.00th=[26346], 50.00th=[26870], 60.00th=[27132], 00:18:55.248 | 70.00th=[28181], 80.00th=[30278], 90.00th=[33424], 95.00th=[44303], 00:18:55.248 | 99.00th=[66323], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:18:55.248 | 99.99th=[66323] 00:18:55.248 bw ( KiB/s): min= 7688, max= 8192, per=21.69%, avg=7940.00, stdev=356.38, samples=2 00:18:55.248 iops : min= 1922, max= 2048, avg=1985.00, stdev=89.10, samples=2 00:18:55.248 lat (msec) : 2=0.03%, 20=1.59%, 50=91.64%, 100=4.17%, 250=2.58% 00:18:55.248 cpu : usr=3.89%, sys=5.29%, ctx=148, majf=0, minf=15 00:18:55.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:55.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.248 issued rwts: total=1601,2048,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:55.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.248 job2: (groupid=0, jobs=1): err= 0: pid=2279695: Tue Jul 23 08:32:07 2024 00:18:55.248 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:18:55.248 slat (usec): min=3, max=45455, avg=256.61, stdev=2142.53 00:18:55.248 clat (usec): min=957, max=93548, avg=33095.89, stdev=17261.41 00:18:55.248 lat (usec): min=974, max=93585, avg=33352.50, stdev=17458.87 00:18:55.248 clat percentiles (usec): 00:18:55.248 | 1.00th=[ 1418], 5.00th=[10683], 10.00th=[16712], 20.00th=[19006], 00:18:55.248 | 30.00th=[19792], 40.00th=[28181], 50.00th=[30540], 60.00th=[33817], 00:18:55.248 | 70.00th=[38011], 80.00th=[47449], 90.00th=[61604], 95.00th=[64226], 00:18:55.248 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[87557], 00:18:55.248 | 99.99th=[93848] 00:18:55.248 write: IOPS=2377, BW=9510KiB/s (9739kB/s)(9596KiB/1009msec); 0 zone resets 00:18:55.248 slat (usec): min=4, max=29999, avg=171.57, stdev=1507.71 00:18:55.248 clat (usec): min=1145, max=95771, avg=25069.01, stdev=14289.97 00:18:55.248 lat (usec): min=1175, max=95782, avg=25240.57, stdev=14374.76 00:18:55.248 clat percentiles (usec): 00:18:55.248 | 1.00th=[ 2212], 5.00th=[ 7963], 10.00th=[13304], 20.00th=[17695], 00:18:55.248 | 30.00th=[18482], 40.00th=[19006], 50.00th=[21890], 60.00th=[24249], 00:18:55.248 | 70.00th=[25822], 80.00th=[30802], 90.00th=[42730], 95.00th=[58983], 00:18:55.248 | 99.00th=[81265], 99.50th=[81265], 99.90th=[88605], 99.95th=[88605], 00:18:55.248 | 99.99th=[95945] 00:18:55.248 bw ( KiB/s): min= 8776, max= 9392, per=24.81%, avg=9084.00, stdev=435.58, samples=2 00:18:55.248 iops : min= 2194, max= 2348, avg=2271.00, stdev=108.89, samples=2 00:18:55.248 lat (usec) : 1000=0.04% 00:18:55.248 lat (msec) : 2=2.34%, 4=0.02%, 10=3.31%, 20=33.87%, 50=49.54% 00:18:55.248 lat (msec) : 100=10.88% 00:18:55.248 cpu : usr=2.98%, sys=3.57%, ctx=120, majf=0, minf=11 00:18:55.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:55.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.248 issued rwts: total=2048,2399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.248 job3: (groupid=0, jobs=1): err= 0: pid=2279696: Tue Jul 23 08:32:07 2024 00:18:55.248 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:18:55.248 slat (usec): min=4, max=23668, avg=321.31, stdev=1604.62 00:18:55.248 clat (msec): min=23, max=121, avg=42.76, stdev=17.89 00:18:55.248 lat (msec): min=23, max=121, avg=43.08, stdev=18.04 00:18:55.248 clat percentiles (msec): 00:18:55.248 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:18:55.248 | 30.00th=[ 33], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 39], 00:18:55.248 | 70.00th=[ 41], 80.00th=[ 45], 90.00th=[ 68], 95.00th=[ 90], 00:18:55.248 | 99.00th=[ 102], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 122], 00:18:55.248 | 99.99th=[ 122] 00:18:55.248 write: IOPS=1630, BW=6520KiB/s (6677kB/s)(6540KiB/1003msec); 0 zone resets 00:18:55.248 slat (usec): min=8, max=7076, avg=280.92, stdev=890.50 00:18:55.248 clat (usec): min=879, max=55049, avg=37135.91, stdev=8701.14 00:18:55.248 lat (usec): min=7085, max=55113, avg=37416.83, stdev=8711.58 00:18:55.248 clat percentiles (usec): 00:18:55.248 | 1.00th=[ 8094], 5.00th=[16909], 10.00th=[31065], 20.00th=[33162], 00:18:55.248 | 
30.00th=[33424], 40.00th=[34866], 50.00th=[36963], 60.00th=[37487], 00:18:55.248 | 70.00th=[41157], 80.00th=[43779], 90.00th=[47973], 95.00th=[50070], 00:18:55.248 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:18:55.248 | 99.99th=[54789] 00:18:55.248 bw ( KiB/s): min= 6096, max= 6248, per=16.86%, avg=6172.00, stdev=107.48, samples=2 00:18:55.248 iops : min= 1524, max= 1562, avg=1543.00, stdev=26.87, samples=2 00:18:55.248 lat (usec) : 1000=0.03% 00:18:55.248 lat (msec) : 10=1.29%, 20=1.45%, 50=85.34%, 100=11.04%, 250=0.85% 00:18:55.248 cpu : usr=2.99%, sys=7.49%, ctx=283, majf=0, minf=13 00:18:55.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:55.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.248 issued rwts: total=1536,1635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.248 00:18:55.248 Run status group 0 (all jobs): 00:18:55.248 READ: bw=31.6MiB/s (33.2MB/s), 6126KiB/s-11.8MiB/s (6273kB/s-12.3MB/s), io=32.3MiB (33.8MB), run=1003-1020msec 00:18:55.248 WRITE: bw=35.8MiB/s (37.5MB/s), 6520KiB/s-12.5MiB/s (6677kB/s-13.1MB/s), io=36.5MiB (38.2MB), run=1003-1020msec 00:18:55.248 00:18:55.248 Disk stats (read/write): 00:18:55.248 nvme0n1: ios=2612/2675, merge=0/0, ticks=44956/35173, in_queue=80129, util=95.89% 00:18:55.248 nvme0n2: ios=1422/1536, merge=0/0, ticks=15181/10819, in_queue=26000, util=96.52% 00:18:55.248 nvme0n3: ios=1808/2048, merge=0/0, ticks=36799/32687, in_queue=69486, util=96.17% 00:18:55.248 nvme0n4: ios=1082/1495, merge=0/0, ticks=15342/18605, in_queue=33947, util=95.91% 00:18:55.248 08:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:55.248 [global] 00:18:55.248 thread=1 00:18:55.248 invalidate=1 00:18:55.248 rw=randwrite 00:18:55.248 time_based=1 00:18:55.248 runtime=1 00:18:55.248 ioengine=libaio 00:18:55.248 direct=1 00:18:55.248 bs=4096 00:18:55.248 iodepth=128 00:18:55.248 norandommap=0 00:18:55.248 numjobs=1 00:18:55.248 00:18:55.248 verify_dump=1 00:18:55.248 verify_backlog=512 00:18:55.248 verify_state_save=0 00:18:55.248 do_verify=1 00:18:55.248 verify=crc32c-intel 00:18:55.248 [job0] 00:18:55.248 filename=/dev/nvme0n1 00:18:55.248 [job1] 00:18:55.248 filename=/dev/nvme0n2 00:18:55.248 [job2] 00:18:55.248 filename=/dev/nvme0n3 00:18:55.248 [job3] 00:18:55.248 filename=/dev/nvme0n4 00:18:55.248 Could not set queue depth (nvme0n1) 00:18:55.248 Could not set queue depth (nvme0n2) 00:18:55.248 Could not set queue depth (nvme0n3) 00:18:55.248 Could not set queue depth (nvme0n4) 00:18:55.248 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.248 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.248 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.248 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.248 fio-3.35 00:18:55.248 Starting 4 threads 00:18:56.624 00:18:56.624 job0: (groupid=0, jobs=1): err= 0: pid=2279937: Tue Jul 23 08:32:09 2024 00:18:56.624 read: IOPS=1497, BW=5988KiB/s 
(6132kB/s)(6144KiB/1026msec) 00:18:56.624 slat (usec): min=5, max=20820, avg=239.38, stdev=1426.74 00:18:56.624 clat (usec): min=13907, max=75322, avg=27375.79, stdev=11965.14 00:18:56.624 lat (usec): min=13924, max=75353, avg=27615.17, stdev=12078.94 00:18:56.624 clat percentiles (usec): 00:18:56.624 | 1.00th=[15533], 5.00th=[19006], 10.00th=[19530], 20.00th=[20055], 00:18:56.624 | 30.00th=[21890], 40.00th=[22152], 50.00th=[24511], 60.00th=[25035], 00:18:56.624 | 70.00th=[25560], 80.00th=[27395], 90.00th=[44827], 95.00th=[59507], 00:18:56.624 | 99.00th=[71828], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:18:56.624 | 99.99th=[74974] 00:18:56.624 write: IOPS=1713, BW=6854KiB/s (7018kB/s)(7032KiB/1026msec); 0 zone resets 00:18:56.624 slat (usec): min=7, max=21707, avg=346.81, stdev=1560.28 00:18:56.624 clat (msec): min=6, max=114, avg=49.99, stdev=27.56 00:18:56.624 lat (msec): min=6, max=114, avg=50.34, stdev=27.76 00:18:56.624 clat percentiles (msec): 00:18:56.624 | 1.00th=[ 10], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 23], 00:18:56.624 | 30.00th=[ 26], 40.00th=[ 37], 50.00th=[ 52], 60.00th=[ 54], 00:18:56.624 | 70.00th=[ 61], 80.00th=[ 77], 90.00th=[ 93], 95.00th=[ 105], 00:18:56.624 | 99.00th=[ 114], 99.50th=[ 114], 99.90th=[ 115], 99.95th=[ 115], 00:18:56.624 | 99.99th=[ 115] 00:18:56.624 bw ( KiB/s): min= 4848, max= 8175, per=19.48%, avg=6511.50, stdev=2352.54, samples=2 00:18:56.624 iops : min= 1212, max= 2043, avg=1627.50, stdev=587.61, samples=2 00:18:56.624 lat (msec) : 10=0.79%, 20=11.69%, 50=55.65%, 100=28.81%, 250=3.07% 00:18:56.624 cpu : usr=3.02%, sys=5.27%, ctx=188, majf=0, minf=1 00:18:56.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:18:56.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.624 issued rwts: total=1536,1758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.624 job1: (groupid=0, jobs=1): err= 0: pid=2279946: Tue Jul 23 08:32:09 2024 00:18:56.624 read: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(11.9MiB/1015msec) 00:18:56.624 slat (usec): min=5, max=33114, avg=163.15, stdev=1173.64 00:18:56.624 clat (usec): min=7619, max=96156, avg=21067.38, stdev=13552.52 00:18:56.624 lat (usec): min=10083, max=96193, avg=21230.53, stdev=13671.67 00:18:56.624 clat percentiles (usec): 00:18:56.624 | 1.00th=[10552], 5.00th=[11863], 10.00th=[13435], 20.00th=[15008], 00:18:56.624 | 30.00th=[15270], 40.00th=[15533], 50.00th=[16319], 60.00th=[16909], 00:18:56.624 | 70.00th=[18744], 80.00th=[21103], 90.00th=[38011], 95.00th=[58983], 00:18:56.624 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[91751], 00:18:56.624 | 99.99th=[95945] 00:18:56.624 write: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec); 0 zone resets 00:18:56.624 slat (usec): min=6, max=25920, avg=149.55, stdev=997.86 00:18:56.624 clat (usec): min=9332, max=54482, avg=20706.05, stdev=8881.18 00:18:56.624 lat (usec): min=9377, max=58668, avg=20855.60, stdev=8949.01 00:18:56.624 clat percentiles (usec): 00:18:56.624 | 1.00th=[10159], 5.00th=[11600], 10.00th=[13829], 20.00th=[15401], 00:18:56.624 | 30.00th=[15533], 40.00th=[16188], 50.00th=[16319], 60.00th=[19530], 00:18:56.624 | 70.00th=[20579], 80.00th=[26084], 90.00th=[34866], 95.00th=[41157], 00:18:56.624 | 99.00th=[50070], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:18:56.624 | 99.99th=[54264] 00:18:56.624 bw ( KiB/s): min= 
8712, max=15864, per=36.76%, avg=12288.00, stdev=5057.23, samples=2 00:18:56.624 iops : min= 2178, max= 3966, avg=3072.00, stdev=1264.31, samples=2 00:18:56.624 lat (msec) : 10=0.41%, 20=68.79%, 50=27.39%, 100=3.41% 00:18:56.624 cpu : usr=5.62%, sys=8.09%, ctx=230, majf=0, minf=1 00:18:56.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:56.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.624 issued rwts: total=3058,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.624 job2: (groupid=0, jobs=1): err= 0: pid=2279986: Tue Jul 23 08:32:09 2024 00:18:56.624 read: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec) 00:18:56.624 slat (usec): min=4, max=25391, avg=281.37, stdev=1747.84 00:18:56.624 clat (usec): min=19477, max=64810, avg=33701.13, stdev=7820.83 00:18:56.624 lat (usec): min=19489, max=64844, avg=33982.50, stdev=7989.71 00:18:56.624 clat percentiles (usec): 00:18:56.624 | 1.00th=[20055], 5.00th=[25560], 10.00th=[25822], 20.00th=[27657], 00:18:56.624 | 30.00th=[27657], 40.00th=[29754], 50.00th=[30802], 60.00th=[34341], 00:18:56.624 | 70.00th=[39060], 80.00th=[40109], 90.00th=[44303], 95.00th=[45876], 00:18:56.624 | 99.00th=[57410], 99.50th=[64226], 99.90th=[64226], 99.95th=[64750], 00:18:56.624 | 99.99th=[64750] 00:18:56.624 write: IOPS=1427, BW=5711KiB/s (5849kB/s)(5820KiB/1019msec); 0 zone resets 00:18:56.624 slat (usec): min=6, max=40501, avg=479.35, stdev=2467.44 00:18:56.624 clat (msec): min=13, max=134, avg=64.68, stdev=27.37 00:18:56.624 lat (msec): min=18, max=134, avg=65.16, stdev=27.55 00:18:56.624 clat percentiles (msec): 00:18:56.624 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 47], 00:18:56.624 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 59], 00:18:56.624 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 111], 95.00th=[ 123], 00:18:56.624 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 136], 99.95th=[ 136], 00:18:56.624 | 99.99th=[ 136] 00:18:56.624 bw ( KiB/s): min= 4224, max= 6379, per=15.86%, avg=5301.50, stdev=1523.82, samples=2 00:18:56.624 iops : min= 1056, max= 1594, avg=1325.00, stdev=380.42, samples=2 00:18:56.624 lat (msec) : 20=0.89%, 50=57.68%, 100=32.84%, 250=8.59% 00:18:56.624 cpu : usr=2.65%, sys=3.54%, ctx=154, majf=0, minf=1 00:18:56.624 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:18:56.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.624 issued rwts: total=1024,1455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.624 job3: (groupid=0, jobs=1): err= 0: pid=2279998: Tue Jul 23 08:32:09 2024 00:18:56.624 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:18:56.624 slat (usec): min=6, max=27033, avg=215.77, stdev=1466.61 00:18:56.624 clat (usec): min=10817, max=81073, avg=27489.27, stdev=15656.21 00:18:56.624 lat (usec): min=11097, max=81128, avg=27705.04, stdev=15774.23 00:18:56.624 clat percentiles (usec): 00:18:56.624 | 1.00th=[12780], 5.00th=[16057], 10.00th=[16909], 20.00th=[17433], 00:18:56.624 | 30.00th=[17695], 40.00th=[18744], 50.00th=[20055], 60.00th=[21627], 00:18:56.624 | 70.00th=[23200], 80.00th=[39060], 90.00th=[56361], 95.00th=[64226], 00:18:56.624 | 99.00th=[67634], 
99.50th=[69731], 99.90th=[76022], 99.95th=[78119], 00:18:56.624 | 99.99th=[81265] 00:18:56.624 write: IOPS=2285, BW=9142KiB/s (9361kB/s)(9160KiB/1002msec); 0 zone resets 00:18:56.624 slat (usec): min=5, max=17534, avg=228.64, stdev=1141.29 00:18:56.624 clat (usec): min=965, max=113180, avg=30506.12, stdev=21519.24 00:18:56.624 lat (msec): min=7, max=113, avg=30.73, stdev=21.67 00:18:56.624 clat percentiles (msec): 00:18:56.624 | 1.00th=[ 9], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 17], 00:18:56.624 | 30.00th=[ 18], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 22], 00:18:56.624 | 70.00th=[ 39], 80.00th=[ 52], 90.00th=[ 58], 95.00th=[ 85], 00:18:56.624 | 99.00th=[ 99], 99.50th=[ 110], 99.90th=[ 113], 99.95th=[ 113], 00:18:56.624 | 99.99th=[ 113] 00:18:56.624 bw ( KiB/s): min= 8192, max= 9112, per=25.88%, avg=8652.00, stdev=650.54, samples=2 00:18:56.624 iops : min= 2048, max= 2278, avg=2163.00, stdev=162.63, samples=2 00:18:56.624 lat (usec) : 1000=0.02% 00:18:56.624 lat (msec) : 10=1.45%, 20=48.32%, 50=32.53%, 100=17.17%, 250=0.51% 00:18:56.624 cpu : usr=4.70%, sys=6.29%, ctx=190, majf=0, minf=1 00:18:56.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:56.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.625 issued rwts: total=2048,2290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.625 00:18:56.625 Run status group 0 (all jobs): 00:18:56.625 READ: bw=29.2MiB/s (30.6MB/s), 4020KiB/s-11.8MiB/s (4116kB/s-12.3MB/s), io=29.9MiB (31.4MB), run=1002-1026msec 00:18:56.625 WRITE: bw=32.6MiB/s (34.2MB/s), 5711KiB/s-11.8MiB/s (5849kB/s-12.4MB/s), io=33.5MiB (35.1MB), run=1002-1026msec 00:18:56.625 00:18:56.625 Disk stats (read/write): 00:18:56.625 nvme0n1: ios=1078/1407, merge=0/0, ticks=27822/70560, in_queue=98382, util=98.40% 00:18:56.625 nvme0n2: ios=2605/3033, merge=0/0, ticks=19319/23651, in_queue=42970, util=98.47% 00:18:56.625 nvme0n3: ios=1024/1103, merge=0/0, ticks=17162/32418, in_queue=49580, util=87.73% 00:18:56.625 nvme0n4: ios=1411/1536, merge=0/0, ticks=18030/22600, in_queue=40630, util=98.58% 00:18:56.625 08:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:56.625 08:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2280089 00:18:56.625 08:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:56.625 08:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:56.625 [global] 00:18:56.625 thread=1 00:18:56.625 invalidate=1 00:18:56.625 rw=read 00:18:56.625 time_based=1 00:18:56.625 runtime=10 00:18:56.625 ioengine=libaio 00:18:56.625 direct=1 00:18:56.625 bs=4096 00:18:56.625 iodepth=1 00:18:56.625 norandommap=1 00:18:56.625 numjobs=1 00:18:56.625 00:18:56.625 [job0] 00:18:56.625 filename=/dev/nvme0n1 00:18:56.625 [job1] 00:18:56.625 filename=/dev/nvme0n2 00:18:56.625 [job2] 00:18:56.625 filename=/dev/nvme0n3 00:18:56.625 [job3] 00:18:56.625 filename=/dev/nvme0n4 00:18:56.625 Could not set queue depth (nvme0n1) 00:18:56.625 Could not set queue depth (nvme0n2) 00:18:56.625 Could not set queue depth (nvme0n3) 00:18:56.625 Could not set queue depth (nvme0n4) 00:18:56.884 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:18:56.884 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.884 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.884 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.884 fio-3.35 00:18:56.884 Starting 4 threads 00:19:00.166 08:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:00.424 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=13856768, buflen=4096 00:19:00.424 fio: pid=2280278, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:00.424 08:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:00.682 08:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.682 08:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:00.682 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=7258112, buflen=4096 00:19:00.682 fio: pid=2280277, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:01.250 08:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.250 08:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:01.250 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=15204352, buflen=4096 00:19:01.250 fio: pid=2280272, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:01.846 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=19685376, buflen=4096 00:19:01.846 fio: pid=2280273, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:01.846 00:19:01.846 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2280272: Tue Jul 23 08:32:14 2024 00:19:01.846 read: IOPS=884, BW=3536KiB/s (3621kB/s)(14.5MiB/4199msec) 00:19:01.846 slat (usec): min=7, max=15852, avg=30.36, stdev=355.64 00:19:01.846 clat (usec): min=300, max=117737, avg=1088.96, stdev=5097.82 00:19:01.846 lat (usec): min=309, max=117754, avg=1119.32, stdev=5174.71 00:19:01.846 clat percentiles (usec): 00:19:01.846 | 1.00th=[ 343], 5.00th=[ 375], 10.00th=[ 388], 20.00th=[ 412], 00:19:01.846 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 494], 60.00th=[ 529], 00:19:01.846 | 70.00th=[ 570], 80.00th=[ 594], 90.00th=[ 619], 95.00th=[ 644], 00:19:01.846 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41681], 99.95th=[ 41681], 00:19:01.846 | 99.99th=[117965] 00:19:01.846 bw ( KiB/s): min= 90, max= 7776, per=32.06%, avg=3709.25, stdev=3100.52, samples=8 00:19:01.846 iops : min= 22, max= 1944, avg=927.25, stdev=775.21, samples=8 00:19:01.846 lat (usec) : 500=51.74%, 750=46.30%, 1000=0.48% 00:19:01.846 lat (msec) : 2=0.03%, 4=0.03%, 50=1.37%, 250=0.03% 00:19:01.846 cpu : usr=1.26%, sys=2.69%, ctx=3717, majf=0, minf=1 00:19:01.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:01.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.846 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.846 issued rwts: total=3713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.846 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2280273: Tue Jul 23 08:32:14 2024 00:19:01.846 read: IOPS=1016, BW=4067KiB/s (4164kB/s)(18.8MiB/4727msec) 00:19:01.846 slat (usec): min=6, max=30876, avg=32.62, stdev=496.21 00:19:01.846 clat (usec): min=303, max=41627, avg=939.26, stdev=4122.76 00:19:01.846 lat (usec): min=310, max=72003, avg=970.44, stdev=4213.24 00:19:01.846 clat percentiles (usec): 00:19:01.846 | 1.00th=[ 330], 5.00th=[ 379], 10.00th=[ 404], 20.00th=[ 433], 00:19:01.846 | 30.00th=[ 465], 40.00th=[ 490], 50.00th=[ 510], 60.00th=[ 537], 00:19:01.846 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 611], 95.00th=[ 660], 00:19:01.846 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:01.846 | 99.99th=[41681] 00:19:01.846 bw ( KiB/s): min= 181, max= 8224, per=36.87%, avg=4266.33, stdev=3283.32, samples=9 00:19:01.846 iops : min= 45, max= 2056, avg=1066.56, stdev=820.87, samples=9 00:19:01.846 lat (usec) : 500=44.60%, 750=52.24%, 1000=1.89% 00:19:01.846 lat (msec) : 2=0.10%, 4=0.08%, 50=1.06% 00:19:01.846 cpu : usr=1.31%, sys=3.45%, ctx=4812, majf=0, minf=1 00:19:01.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.846 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.846 issued rwts: total=4807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.847 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2280277: Tue Jul 23 08:32:14 2024 00:19:01.847 read: IOPS=500, BW=2002KiB/s (2050kB/s)(7088KiB/3540msec) 00:19:01.847 slat (nsec): min=7992, max=89138, avg=24719.89, stdev=8295.22 00:19:01.847 clat (usec): min=338, max=43863, avg=1953.30, stdev=7392.85 00:19:01.847 lat (usec): min=348, max=43898, avg=1978.02, stdev=7393.66 00:19:01.847 clat percentiles (usec): 00:19:01.847 | 1.00th=[ 392], 5.00th=[ 429], 10.00th=[ 445], 20.00th=[ 482], 00:19:01.847 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 562], 00:19:01.847 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 709], 95.00th=[ 922], 00:19:01.847 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[43779], 00:19:01.847 | 99.99th=[43779] 00:19:01.847 bw ( KiB/s): min= 96, max= 6824, per=17.48%, avg=2022.86, stdev=3090.94, samples=7 00:19:01.847 iops : min= 24, max= 1706, avg=505.71, stdev=772.73, samples=7 00:19:01.847 lat (usec) : 500=26.40%, 750=64.69%, 1000=5.36% 00:19:01.847 lat (msec) : 10=0.06%, 50=3.44% 00:19:01.847 cpu : usr=0.59%, sys=2.06%, ctx=1773, majf=0, minf=1 00:19:01.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.847 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.847 issued rwts: total=1773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.847 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=2280278: Tue Jul 23 08:32:14 2024 00:19:01.847 read: IOPS=1071, BW=4284KiB/s (4386kB/s)(13.2MiB/3159msec) 00:19:01.847 slat (nsec): min=8942, max=68548, avg=20588.17, stdev=5985.54 00:19:01.847 clat (usec): min=328, max=41531, avg=901.19, stdev=4105.01 00:19:01.847 lat (usec): min=338, max=41557, avg=921.78, stdev=4105.52 00:19:01.847 clat percentiles (usec): 00:19:01.847 | 1.00th=[ 359], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 424], 00:19:01.847 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 482], 00:19:01.847 | 70.00th=[ 494], 80.00th=[ 529], 90.00th=[ 586], 95.00th=[ 619], 00:19:01.847 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:01.847 | 99.99th=[41681] 00:19:01.847 bw ( KiB/s): min= 2120, max= 7856, per=35.94%, avg=4158.67, stdev=2074.88, samples=6 00:19:01.847 iops : min= 530, max= 1964, avg=1039.67, stdev=518.72, samples=6 00:19:01.847 lat (usec) : 500=72.58%, 750=25.95%, 1000=0.12% 00:19:01.847 lat (msec) : 2=0.27%, 4=0.03%, 50=1.03% 00:19:01.847 cpu : usr=0.85%, sys=3.32%, ctx=3384, majf=0, minf=1 00:19:01.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.847 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.847 issued rwts: total=3384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.847 00:19:01.847 Run status group 0 (all jobs): 00:19:01.847 READ: bw=11.3MiB/s (11.8MB/s), 2002KiB/s-4284KiB/s (2050kB/s-4386kB/s), io=53.4MiB (56.0MB), run=3159-4727msec 00:19:01.847 00:19:01.847 Disk stats (read/write): 00:19:01.847 nvme0n1: ios=3710/0, merge=0/0, ticks=3798/0, in_queue=3798, util=94.41% 00:19:01.847 nvme0n2: ios=4801/0, merge=0/0, ticks=4260/0, in_queue=4260, util=95.22% 00:19:01.847 nvme0n3: ios=1773/0, merge=0/0, ticks=3462/0, in_queue=3462, util=96.60% 00:19:01.847 nvme0n4: ios=3182/0, merge=0/0, ticks=2910/0, in_queue=2910, util=96.96% 00:19:01.847 08:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.847 08:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:02.789 08:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.789 08:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:03.355 08:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:03.355 08:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:03.922 08:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:03.922 08:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:04.488 08:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:04.488 08:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:05.422 08:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:05.422 08:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2280089 00:19:05.422 08:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:05.422 08:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:06.357 nvmf hotplug test: fio failed as expected 00:19:06.357 08:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.923 rmmod nvme_tcp 00:19:06.923 rmmod nvme_fabrics 00:19:06.923 rmmod nvme_keyring 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe 
-v -r nvme-fabrics 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2277632 ']' 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2277632 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2277632 ']' 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2277632 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.923 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2277632 00:19:07.182 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:07.182 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:07.182 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2277632' 00:19:07.182 killing process with pid 2277632 00:19:07.182 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2277632 00:19:07.182 08:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2277632 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.087 08:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:11.630 00:19:11.630 real 0m36.899s 00:19:11.630 user 2m13.530s 00:19:11.630 sys 0m9.334s 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.630 ************************************ 00:19:11.630 END TEST nvmf_fio_target 00:19:11.630 ************************************ 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:11.630 
08:32:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:11.630 ************************************ 00:19:11.630 START TEST nvmf_bdevio 00:19:11.630 ************************************ 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:11.630 * Looking for test storage... 00:19:11.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.630 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.631 08:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:14.929 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:14.929 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:14.929 Found net devices under 0000:84:00.0: cvl_0_0 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:14.929 Found net devices under 0000:84:00.1: cvl_0_1 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.929 08:32:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.929 08:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.929 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.929 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:19:14.930 00:19:14.930 --- 10.0.0.2 ping statistics --- 00:19:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.930 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:19:14.930 00:19:14.930 --- 10.0.0.1 ping statistics --- 00:19:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.930 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2283602 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2283602 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2283602 ']' 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.930 08:32:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.930 [2024-07-23 08:32:27.301328] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:14.930 [2024-07-23 08:32:27.301653] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.190 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.190 [2024-07-23 08:32:27.644277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.760 [2024-07-23 08:32:28.185194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.760 [2024-07-23 08:32:28.185357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.760 [2024-07-23 08:32:28.185425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.760 [2024-07-23 08:32:28.185471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.760 [2024-07-23 08:32:28.185519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.760 [2024-07-23 08:32:28.185766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:15.760 [2024-07-23 08:32:28.185879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:15.760 [2024-07-23 08:32:28.185961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.760 [2024-07-23 08:32:28.185985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:16.733 08:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.733 08:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:16.733 08:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.733 08:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.733 08:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.733 [2024-07-23 08:32:29.036787] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.733 Malloc0 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.733 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.993 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.993 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.993 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.993 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.993 [2024-07-23 08:32:29.264432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.993 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.993 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.994 { 00:19:16.994 "params": { 00:19:16.994 "name": "Nvme$subsystem", 00:19:16.994 "trtype": "$TEST_TRANSPORT", 00:19:16.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.994 "adrfam": "ipv4", 00:19:16.994 "trsvcid": "$NVMF_PORT", 00:19:16.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.994 "hdgst": ${hdgst:-false}, 00:19:16.994 "ddgst": ${ddgst:-false} 00:19:16.994 }, 00:19:16.994 "method": "bdev_nvme_attach_controller" 00:19:16.994 } 00:19:16.994 EOF 00:19:16.994 )") 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:16.994 08:32:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:16.994 "params": { 00:19:16.994 "name": "Nvme1", 00:19:16.994 "trtype": "tcp", 00:19:16.994 "traddr": "10.0.0.2", 00:19:16.994 "adrfam": "ipv4", 00:19:16.994 "trsvcid": "4420", 00:19:16.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.994 "hdgst": false, 00:19:16.994 "ddgst": false 00:19:16.994 }, 00:19:16.994 "method": "bdev_nvme_attach_controller" 00:19:16.994 }' 00:19:16.994 [2024-07-23 08:32:29.435621] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:16.994 [2024-07-23 08:32:29.435945] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283886 ] 00:19:17.253 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.253 [2024-07-23 08:32:29.729569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:17.822 [2024-07-23 08:32:30.203749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.822 [2024-07-23 08:32:30.203800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.822 [2024-07-23 08:32:30.203812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.389 I/O targets: 00:19:18.389 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:18.389 00:19:18.389 00:19:18.389 CUnit - A unit testing framework for C - Version 2.1-3 00:19:18.389 http://cunit.sourceforge.net/ 00:19:18.389 00:19:18.389 00:19:18.389 Suite: bdevio tests on: Nvme1n1 00:19:18.389 Test: blockdev write read block ...passed 00:19:18.675 Test: blockdev write zeroes read block ...passed 00:19:18.675 Test: blockdev write zeroes read no split ...passed 00:19:18.675 Test: blockdev write zeroes read split ...passed 00:19:18.675 Test: blockdev write zeroes read split partial ...passed 00:19:18.675 Test: blockdev reset ...[2024-07-23 08:32:31.081568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.675 [2024-07-23 08:32:31.081953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:19:18.938 [2024-07-23 08:32:31.190463] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:18.938 passed 00:19:18.939 Test: blockdev write read 8 blocks ...passed 00:19:18.939 Test: blockdev write read size > 128k ...passed 00:19:18.939 Test: blockdev write read invalid size ...passed 00:19:18.939 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:18.939 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:18.939 Test: blockdev write read max offset ...passed 00:19:18.939 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:18.939 Test: blockdev writev readv 8 blocks ...passed 00:19:18.939 Test: blockdev writev readv 30 x 1block ...passed 00:19:18.939 Test: blockdev writev readv block ...passed 00:19:18.939 Test: blockdev writev readv size > 128k ...passed 00:19:18.939 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:18.939 Test: blockdev comparev and writev ...[2024-07-23 08:32:31.419560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.419723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.939 [2024-07-23 08:32:31.419822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.419889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.939 [2024-07-23 08:32:31.420959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.421046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.939 [2024-07-23 08:32:31.421130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.421192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.939 [2024-07-23 08:32:31.422270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.422385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.939 [2024-07-23 08:32:31.422435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.422471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.939 [2024-07-23 08:32:31.423488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.423540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.939 [2024-07-23 08:32:31.423588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.939 [2024-07-23 08:32:31.423663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:19.198 passed 00:19:19.198 Test: blockdev nvme passthru rw ...passed 00:19:19.198 Test: blockdev nvme passthru vendor specific ...[2024-07-23 08:32:31.506894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.198 [2024-07-23 08:32:31.507020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:19.198 [2024-07-23 08:32:31.507538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.198 [2024-07-23 08:32:31.507632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:19.198 [2024-07-23 08:32:31.508251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.198 [2024-07-23 08:32:31.508349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:19.198 [2024-07-23 08:32:31.508728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.198 [2024-07-23 08:32:31.508808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:19.198 passed 00:19:19.198 Test: blockdev nvme admin passthru ...passed 00:19:19.198 Test: blockdev copy ...passed 00:19:19.198 00:19:19.198 Run Summary: Type Total Ran Passed Failed Inactive 00:19:19.198 suites 1 1 n/a 0 0 00:19:19.198 tests 23 23 23 0 0 00:19:19.198 asserts 152 152 152 0 n/a 00:19:19.198 00:19:19.198 Elapsed time = 1.499 seconds 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.105 rmmod nvme_tcp 00:19:21.105 rmmod nvme_fabrics 00:19:21.105 rmmod nvme_keyring 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2283602 ']' 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2283602 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2283602 ']' 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2283602 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:21.105 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.106 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2283602 00:19:21.106 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:21.106 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:21.106 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2283602' 00:19:21.106 killing process with pid 2283602 00:19:21.106 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2283602 00:19:21.106 08:32:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2283602 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.403 08:32:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:26.309 00:19:26.309 real 0m14.904s 00:19:26.309 user 0m37.198s 00:19:26.309 sys 0m4.258s 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 ************************************ 00:19:26.309 END TEST nvmf_bdevio 00:19:26.309 ************************************ 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:26.309 00:19:26.309 real 5m46.871s 00:19:26.309 user 14m59.536s 00:19:26.309 sys 1m38.041s 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 ************************************ 00:19:26.309 END TEST nvmf_target_core 00:19:26.309 
************************************ 00:19:26.309 08:32:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:26.309 08:32:38 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:19:26.309 08:32:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:26.309 08:32:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.309 08:32:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 ************************************ 00:19:26.309 START TEST nvmf_target_extra 00:19:26.309 ************************************ 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:19:26.309 * Looking for test storage... 00:19:26.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.309 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
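run_test nvmf_example launches the example target test here; a little further down in the trace (around 08:32:43) it brings the target up with a handful of rpc_cmd calls and then drives it with spdk_nvme_perf. Written out as a plain rpc.py sequence, that bring-up looks roughly like the sketch below; the transport options, bdev size, NQN, namespace and listener address are the values visible in this log, while the rpc.py path and the direct invocation are assumptions for illustration.

# Hedged rpc.py reconstruction of the bring-up the example test performs below.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options exactly as the test passes them
$RPC bdev_malloc_create 64 512                       # 64 MiB malloc bdev with 512-byte blocks ("Malloc0")
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The perf run recorded later in the trace then targets that listener:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

In the harness these calls go through rpc_cmd, which talks to the target over its default RPC socket (/var/tmp/spdk.sock in this run); the direct rpc.py form above is only the equivalent sketch.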
00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.310 ************************************ 00:19:26.310 START TEST nvmf_example 00:19:26.310 ************************************ 00:19:26.310 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:19:26.310 * Looking for test storage... 00:19:26.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.569 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.570 08:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.570 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.861 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:29.861 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:29.861 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.861 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:29.862 Found net devices under 0000:84:00.0: cvl_0_0 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.862 08:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:29.862 Found net devices under 0000:84:00.1: cvl_0_1 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:29.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:19:29.862 00:19:29.862 --- 10.0.0.2 ping statistics --- 00:19:29.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.862 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:19:29.862 00:19:29.862 --- 10.0.0.1 ping statistics --- 00:19:29.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.862 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2286798 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2286798 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2286798 ']' 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.862 08:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.862 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:30.122 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.495 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.496 08:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:31.496 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:31.753 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.719 Initializing NVMe Controllers 00:19:41.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:41.720 Initialization complete. Launching workers. 00:19:41.720 ======================================================== 00:19:41.720 Latency(us) 00:19:41.720 Device Information : IOPS MiB/s Average min max 00:19:41.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9940.47 38.83 6440.19 1625.21 21752.91 00:19:41.720 ======================================================== 00:19:41.720 Total : 9940.47 38.83 6440.19 1625.21 21752.91 00:19:41.720 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.978 rmmod nvme_tcp 00:19:41.978 rmmod nvme_fabrics 00:19:41.978 rmmod nvme_keyring 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2286798 ']' 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2286798 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2286798 ']' 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2286798 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.978 08:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2286798 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2286798' 00:19:41.978 killing process with pid 2286798 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 2286798 00:19:41.978 08:32:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 2286798 00:19:43.930 nvmf threads initialize successfully 00:19:43.930 bdev subsystem init successfully 00:19:43.930 created a nvmf target service 00:19:43.930 create targets's poll groups done 00:19:43.930 all subsystems of target started 00:19:43.930 nvmf target is running 00:19:43.930 all subsystems of target stopped 00:19:43.930 destroy targets's poll groups done 00:19:43.930 destroyed the nvmf target service 00:19:43.930 bdev subsystem finish successfully 00:19:43.930 nvmf threads destroy successfully 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.930 08:32:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.838 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:46.099 00:19:46.099 real 0m19.624s 00:19:46.099 user 0m51.968s 00:19:46.099 sys 0m4.669s 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:46.099 ************************************ 00:19:46.099 END TEST nvmf_example 00:19:46.099 ************************************ 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:46.099 08:32:58 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:46.099 ************************************ 00:19:46.099 START TEST nvmf_filesystem 00:19:46.099 ************************************ 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:46.099 * Looking for test storage... 00:19:46.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:19:46.099 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:19:46.099 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:19:46.100 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:19:46.100 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:19:46.100 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:46.100 #define SPDK_CONFIG_H 00:19:46.100 #define SPDK_CONFIG_APPS 1 00:19:46.100 #define SPDK_CONFIG_ARCH native 00:19:46.100 #define SPDK_CONFIG_ASAN 1 00:19:46.100 #undef SPDK_CONFIG_AVAHI 00:19:46.100 #undef SPDK_CONFIG_CET 00:19:46.100 #define SPDK_CONFIG_COVERAGE 1 00:19:46.100 #define SPDK_CONFIG_CROSS_PREFIX 00:19:46.100 #undef SPDK_CONFIG_CRYPTO 00:19:46.100 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:46.100 #undef SPDK_CONFIG_CUSTOMOCF 00:19:46.100 #undef SPDK_CONFIG_DAOS 00:19:46.100 #define SPDK_CONFIG_DAOS_DIR 00:19:46.100 #define SPDK_CONFIG_DEBUG 1 00:19:46.100 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:46.100 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:19:46.100 #define SPDK_CONFIG_DPDK_INC_DIR 00:19:46.100 #define SPDK_CONFIG_DPDK_LIB_DIR 00:19:46.100 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:46.100 #undef SPDK_CONFIG_DPDK_UADK 00:19:46.100 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:19:46.100 #define SPDK_CONFIG_EXAMPLES 1 00:19:46.100 #undef SPDK_CONFIG_FC 00:19:46.100 #define SPDK_CONFIG_FC_PATH 00:19:46.100 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:46.100 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:46.100 #undef SPDK_CONFIG_FUSE 00:19:46.100 #undef SPDK_CONFIG_FUZZER 00:19:46.100 #define SPDK_CONFIG_FUZZER_LIB 00:19:46.100 #undef SPDK_CONFIG_GOLANG 00:19:46.100 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:46.100 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:46.100 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:46.100 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:46.100 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:46.100 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:46.100 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:46.100 #define SPDK_CONFIG_IDXD 1 00:19:46.100 #define SPDK_CONFIG_IDXD_KERNEL 1 00:19:46.100 #undef SPDK_CONFIG_IPSEC_MB 00:19:46.100 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:46.100 #define SPDK_CONFIG_ISAL 1 00:19:46.100 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:46.100 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:46.100 #define SPDK_CONFIG_LIBDIR 00:19:46.100 #undef SPDK_CONFIG_LTO 00:19:46.100 #define SPDK_CONFIG_MAX_LCORES 128 00:19:46.100 #define SPDK_CONFIG_NVME_CUSE 1 00:19:46.100 #undef SPDK_CONFIG_OCF 00:19:46.100 #define SPDK_CONFIG_OCF_PATH 00:19:46.100 #define SPDK_CONFIG_OPENSSL_PATH 00:19:46.100 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:46.100 #define SPDK_CONFIG_PGO_DIR 00:19:46.100 #undef SPDK_CONFIG_PGO_USE 00:19:46.100 #define SPDK_CONFIG_PREFIX /usr/local 00:19:46.100 #undef SPDK_CONFIG_RAID5F 00:19:46.100 #undef SPDK_CONFIG_RBD 00:19:46.100 #define SPDK_CONFIG_RDMA 1 00:19:46.100 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:46.101 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:46.101 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:46.101 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:46.101 #define SPDK_CONFIG_SHARED 1 00:19:46.101 #undef SPDK_CONFIG_SMA 00:19:46.101 #define SPDK_CONFIG_TESTS 1 00:19:46.101 #undef SPDK_CONFIG_TSAN 00:19:46.101 #define SPDK_CONFIG_UBLK 1 00:19:46.101 #define SPDK_CONFIG_UBSAN 1 00:19:46.101 #undef SPDK_CONFIG_UNIT_TESTS 00:19:46.101 #undef SPDK_CONFIG_URING 00:19:46.101 #define SPDK_CONFIG_URING_PATH 00:19:46.101 #undef SPDK_CONFIG_URING_ZNS 00:19:46.101 #undef SPDK_CONFIG_USDT 00:19:46.101 #undef 
SPDK_CONFIG_VBDEV_COMPRESS 00:19:46.101 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:46.101 #undef SPDK_CONFIG_VFIO_USER 00:19:46.101 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:46.101 #define SPDK_CONFIG_VHOST 1 00:19:46.101 #define SPDK_CONFIG_VIRTIO 1 00:19:46.101 #undef SPDK_CONFIG_VTUNE 00:19:46.101 #define SPDK_CONFIG_VTUNE_DIR 00:19:46.101 #define SPDK_CONFIG_WERROR 1 00:19:46.101 #define SPDK_CONFIG_WPDK_DIR 00:19:46.101 #undef SPDK_CONFIG_XNVME 00:19:46.101 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:19:46.101 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
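The paired ": 0" / "export SPDK_TEST_*" entries running through this part of the trace are consistent with the usual bash default-then-export idiom: a no-op colon carrying a ${VAR:=default} expansion, followed by an export of the now-defined flag. A minimal sketch of that idiom, using one of the flags visible above and the value this particular run resolved to (the actual defaults are whatever autotest_common.sh sets; this is an illustration, not a quote from it):

  # Under set -x the default expansion is resolved before the trace is printed,
  # so the log shows only ": 0" (or ": 1") followed by "export SPDK_TEST_ISCSI".
  : "${SPDK_TEST_ISCSI:=0}"   # keep an existing value, otherwise default to 0
  export SPDK_TEST_ISCSI      # make the flag visible to child test processes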
00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:19:46.101 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:19:46.102 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:19:46.363 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:19:46.363 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:19:46.363 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:19:46.364 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:19:46.364 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:19:46.364 
08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:46.364 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:19:46.365 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2288624 ]] 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2288624 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.nQuLQc 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.nQuLQc/tests/target /tmp/spdk.nQuLQc 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:19:46.365 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=38889459712 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083287552 00:19:46.365 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6193827840 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22530387968 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541643776 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=8994226176 00:19:46.366 08:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016659968 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22433792 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22541049856 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541643776 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=593920 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:19:46.366 * Looking for test storage... 
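The set_test_storage walk above fills parallel associative arrays (mounts, fss, sizes, avails, uses) keyed by mount point from the output of df -T; the candidate selection that continues below then compares those numbers against the roughly 2 GiB request. A rough, self-contained sketch of the same parse-then-check flow, assuming df's 1K-block columns are converted to bytes (only the resulting byte values appear in the trace, so the multiplication here is an illustrative assumption):

  # Parse df -T output into per-mount arrays, skipping the header line.
  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))     # assumed conversion: 1K blocks -> bytes
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)

  # Check whether the mount backing a candidate directory has enough free space.
  requested_size=$((2 * 1024 * 1024 * 1024))           # about 2 GiB, as requested above
  candidate_dir=$PWD                                    # stand-in for the test target dir
  mount_point=$(df "$candidate_dir" | awk '$1 !~ /Filesystem/ {print $6}')
  avail_bytes=${avails[$mount_point]:-0}
  (( avail_bytes >= requested_size )) && echo "enough space on $mount_point"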
00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=38889459712 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8408420352 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.366 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:19:46.367 08:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.659 
08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.659 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:49.660 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:49.660 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:49.660 Found net devices under 0000:84:00.0: cvl_0_0 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:49.660 Found net devices under 0000:84:00.1: cvl_0_1 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.660 08:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:19:49.660 00:19:49.660 --- 10.0.0.2 ping statistics --- 00:19:49.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.660 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:19:49.660 00:19:49.660 --- 10.0.0.1 ping statistics --- 00:19:49.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.660 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.660 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:49.920 ************************************ 00:19:49.920 START TEST nvmf_filesystem_no_in_capsule 00:19:49.920 ************************************ 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2290475 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2290475 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2290475 ']' 00:19:49.920 
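After the device discovery above matched the two Intel E810 ports (0x8086:0x159b) to cvl_0_0 and cvl_0_1, nvmf_tcp_init turned them into a point-to-point test topology: cvl_0_0 is moved into a fresh namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and connectivity is ping-verified in both directions. Condensed from the commands visible in the trace (interface and namespace names are specific to this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns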
08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.920 08:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.179 [2024-07-23 08:33:02.443949] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:50.179 [2024-07-23 08:33:02.444254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.179 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.438 [2024-07-23 08:33:02.750327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.006 [2024-07-23 08:33:03.243824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.006 [2024-07-23 08:33:03.243946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.006 [2024-07-23 08:33:03.244007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.006 [2024-07-23 08:33:03.244054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.006 [2024-07-23 08:33:03.244099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
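nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until its RPC socket answers; the EAL and app_setup_trace notices above are that process coming up. A rough sketch of the flow (the polling loop is a simplification of waitforlisten, not the verbatim helper):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target is ready (or has died).
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done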
00:19:51.006 [2024-07-23 08:33:03.244342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.006 [2024-07-23 08:33:03.244395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.006 [2024-07-23 08:33:03.244447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.006 [2024-07-23 08:33:03.244463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.571 08:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.571 08:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:19:51.571 08:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.571 08:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.571 08:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:51.571 [2024-07-23 08:33:04.022442] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.571 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:52.506 Malloc1 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.506 08:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:52.506 [2024-07-23 08:33:04.802060] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:52.506 { 00:19:52.506 "name": "Malloc1", 00:19:52.506 "aliases": [ 00:19:52.506 "b02eeb4f-aeef-4b00-852e-e7904738600c" 00:19:52.506 ], 00:19:52.506 "product_name": "Malloc disk", 00:19:52.506 "block_size": 512, 00:19:52.506 "num_blocks": 1048576, 00:19:52.506 "uuid": "b02eeb4f-aeef-4b00-852e-e7904738600c", 00:19:52.506 "assigned_rate_limits": { 00:19:52.506 "rw_ios_per_sec": 0, 00:19:52.506 "rw_mbytes_per_sec": 0, 00:19:52.506 "r_mbytes_per_sec": 0, 00:19:52.506 "w_mbytes_per_sec": 0 00:19:52.506 }, 00:19:52.506 "claimed": true, 00:19:52.506 "claim_type": "exclusive_write", 00:19:52.506 "zoned": false, 00:19:52.506 "supported_io_types": { 00:19:52.506 "read": 
true, 00:19:52.506 "write": true, 00:19:52.506 "unmap": true, 00:19:52.506 "flush": true, 00:19:52.506 "reset": true, 00:19:52.506 "nvme_admin": false, 00:19:52.506 "nvme_io": false, 00:19:52.506 "nvme_io_md": false, 00:19:52.506 "write_zeroes": true, 00:19:52.506 "zcopy": true, 00:19:52.506 "get_zone_info": false, 00:19:52.506 "zone_management": false, 00:19:52.506 "zone_append": false, 00:19:52.506 "compare": false, 00:19:52.506 "compare_and_write": false, 00:19:52.506 "abort": true, 00:19:52.506 "seek_hole": false, 00:19:52.506 "seek_data": false, 00:19:52.506 "copy": true, 00:19:52.506 "nvme_iov_md": false 00:19:52.506 }, 00:19:52.506 "memory_domains": [ 00:19:52.506 { 00:19:52.506 "dma_device_id": "system", 00:19:52.506 "dma_device_type": 1 00:19:52.506 }, 00:19:52.506 { 00:19:52.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.506 "dma_device_type": 2 00:19:52.506 } 00:19:52.506 ], 00:19:52.506 "driver_specific": {} 00:19:52.506 } 00:19:52.506 ]' 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:19:52.506 08:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:53.073 08:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:19:53.073 08:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:19:53.073 08:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:53.073 08:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:53.073 08:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:19:55.601 08:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:19:56.167 08:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:57.100 ************************************ 00:19:57.100 START TEST filesystem_ext4 00:19:57.100 ************************************ 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
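Between the TCP transport-init notice and the first mkfs, the test provisions the target over RPC (rpc_cmd wraps scripts/rpc.py against the target's socket), connects to it from the root namespace with the kernel initiator, and carves a single GPT partition covering the new namespace. The sequence, condensed from the trace (the --hostnqn/--hostid options are omitted here):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0     # in-capsule data size 0 for this pass
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1            # 512 MiB malloc bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # appears as nvme0n1
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe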
00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:19:57.100 08:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:19:57.100 mke2fs 1.46.5 (30-Dec-2021) 00:19:57.359 Discarding device blocks: 0/522240 done 00:19:57.359 Creating filesystem with 522240 1k blocks and 130560 inodes 00:19:57.359 Filesystem UUID: 23b27397-6669-4795-967e-6635566b2214 00:19:57.359 Superblock backups stored on blocks: 00:19:57.359 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:19:57.359 00:19:57.359 Allocating group tables: 0/64 done 00:19:57.359 Writing inode tables: 0/64 done 00:19:59.894 Creating journal (8192 blocks): done 00:19:59.894 Writing superblocks and filesystem accounting information: 0/64 done 00:19:59.894 00:19:59.894 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:19:59.894 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:00.152 
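Each filesystem_<fs> subtest then runs the same create-and-verify cycle against that partition; as the force=-F / force=-f lines show, make_filesystem only special-cases the force flag for ext4. Roughly (reconstructed from the trace, not the verbatim helper), with the ext4 pass as the example; the btrfs and xfs passes below repeat the same cycle with mkfs.btrfs -f and mkfs.xfs -f:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        [[ $fstype == ext4 ]] && force=-F || force=-f
        mkfs."$fstype" $force "$dev_name"
    }

    make_filesystem ext4 /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync      # prove writes land through NVMe/TCP
    rm /mnt/device/aaa && sync
    umount /mnt/device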
08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2290475 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:00.152 00:20:00.152 real 0m3.139s 00:20:00.152 user 0m0.028s 00:20:00.152 sys 0m0.067s 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:20:00.152 ************************************ 00:20:00.152 END TEST filesystem_ext4 00:20:00.152 ************************************ 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.152 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:00.420 ************************************ 00:20:00.420 START TEST filesystem_btrfs 00:20:00.420 ************************************ 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:20:00.420 08:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:20:00.420 08:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:20:00.718 btrfs-progs v6.6.2 00:20:00.718 See https://btrfs.readthedocs.io for more information. 00:20:00.718 00:20:00.718 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:20:00.718 NOTE: several default settings have changed in version 5.15, please make sure 00:20:00.718 this does not affect your deployments: 00:20:00.718 - DUP for metadata (-m dup) 00:20:00.718 - enabled no-holes (-O no-holes) 00:20:00.718 - enabled free-space-tree (-R free-space-tree) 00:20:00.718 00:20:00.718 Label: (null) 00:20:00.718 UUID: a126a684-c3e1-455a-b0aa-a21eedfe01a8 00:20:00.718 Node size: 16384 00:20:00.718 Sector size: 4096 00:20:00.718 Filesystem size: 510.00MiB 00:20:00.718 Block group profiles: 00:20:00.718 Data: single 8.00MiB 00:20:00.718 Metadata: DUP 32.00MiB 00:20:00.718 System: DUP 8.00MiB 00:20:00.718 SSD detected: yes 00:20:00.718 Zoned device: no 00:20:00.718 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:20:00.718 Runtime features: free-space-tree 00:20:00.718 Checksum: crc32c 00:20:00.718 Number of devices: 1 00:20:00.718 Devices: 00:20:00.718 ID SIZE PATH 00:20:00.718 1 510.00MiB /dev/nvme0n1p1 00:20:00.718 00:20:00.718 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:20:00.718 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2290475 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:00.983 00:20:00.983 real 0m0.696s 00:20:00.983 user 0m0.029s 00:20:00.983 sys 0m0.133s 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:20:00.983 ************************************ 00:20:00.983 END TEST filesystem_btrfs 00:20:00.983 ************************************ 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:00.983 ************************************ 00:20:00.983 START TEST filesystem_xfs 00:20:00.983 ************************************ 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:20:00.983 08:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:20:00.983 08:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:20:01.241 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:20:01.241 = sectsz=512 attr=2, projid32bit=1 00:20:01.241 = crc=1 finobt=1, sparse=1, rmapbt=0 00:20:01.241 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:20:01.241 data = bsize=4096 blocks=130560, imaxpct=25 00:20:01.241 = sunit=0 swidth=0 blks 00:20:01.241 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:20:01.241 log =internal log bsize=4096 blocks=16384, version=2 00:20:01.241 = sectsz=512 sunit=0 blks, lazy-count=1 00:20:01.241 realtime =none extsz=4096 blocks=0, rtextents=0 00:20:02.173 Discarding blocks...Done. 00:20:02.173 08:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:20:02.173 08:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2290475 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:04.700 00:20:04.700 real 0m3.433s 00:20:04.700 user 0m0.025s 00:20:04.700 sys 0m0.084s 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:20:04.700 ************************************ 00:20:04.700 END TEST filesystem_xfs 00:20:04.700 ************************************ 00:20:04.700 08:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:20:04.700 08:33:16 
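After each unmount, the subtest asserts that nothing was lost along the way: the target process is still alive and both the namespace and the test partition are still visible to the host (the kill -0 and lsblk/grep lines from filesystem.sh@37-43 above). In shorthand:

    kill -0 "$nvmfpid"                        # target process must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # test partition still present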
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:04.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2290475 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2290475 ']' 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2290475 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2290475 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:04.700 08:33:17 
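With all three filesystem subtests passed, teardown runs in the reverse order of setup: drop the test partition, flush, disconnect the initiator, delete the subsystem over RPC, and stop the target. Condensed from the trace (killprocess is approximated here by the kill/wait pair):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # remove partition 1 under an exclusive lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"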
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2290475' 00:20:04.700 killing process with pid 2290475 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2290475 00:20:04.700 08:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2290475 00:20:08.887 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:20:08.887 00:20:08.887 real 0m18.691s 00:20:08.887 user 1m7.599s 00:20:08.887 sys 0m2.530s 00:20:08.887 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.887 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:08.887 ************************************ 00:20:08.887 END TEST nvmf_filesystem_no_in_capsule 00:20:08.887 ************************************ 00:20:08.887 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:20:08.887 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:20:08.887 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:08.888 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.888 08:33:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:20:08.888 ************************************ 00:20:08.888 START TEST nvmf_filesystem_in_capsule 00:20:08.888 ************************************ 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2293334 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2293334 00:20:08.888 08:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2293334 ']' 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.888 08:33:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:08.888 [2024-07-23 08:33:21.186797] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:08.888 [2024-07-23 08:33:21.187035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.888 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.147 [2024-07-23 08:33:21.457953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.715 [2024-07-23 08:33:21.960212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.715 [2024-07-23 08:33:21.960360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.715 [2024-07-23 08:33:21.960426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.715 [2024-07-23 08:33:21.960473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.715 [2024-07-23 08:33:21.960519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
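The second pass, nvmf_filesystem_in_capsule, repeats the whole sequence with a fresh target (pid 2293334) and one functional difference: the TCP transport is created with a 4096-byte in-capsule data size, so small host-to-controller writes travel inside the command capsule instead of being fetched with a separate data transfer. The only RPC call that changes (from the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c = in-capsule data size in bytes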
00:20:09.715 [2024-07-23 08:33:21.960755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.715 [2024-07-23 08:33:21.960832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.715 [2024-07-23 08:33:21.960891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.715 [2024-07-23 08:33:21.960902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:10.282 [2024-07-23 08:33:22.784519] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.282 08:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:11.216 Malloc1 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:11.216 [2024-07-23 08:33:23.563801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:11.216 { 00:20:11.216 "name": "Malloc1", 00:20:11.216 "aliases": [ 00:20:11.216 "b62d7544-15e6-4151-b50d-8c28979e7655" 00:20:11.216 ], 00:20:11.216 "product_name": "Malloc disk", 00:20:11.216 "block_size": 512, 00:20:11.216 "num_blocks": 1048576, 00:20:11.216 "uuid": "b62d7544-15e6-4151-b50d-8c28979e7655", 00:20:11.216 "assigned_rate_limits": { 00:20:11.216 "rw_ios_per_sec": 0, 00:20:11.216 "rw_mbytes_per_sec": 0, 00:20:11.216 "r_mbytes_per_sec": 0, 00:20:11.216 "w_mbytes_per_sec": 0 00:20:11.216 }, 00:20:11.216 "claimed": true, 00:20:11.216 "claim_type": "exclusive_write", 00:20:11.216 "zoned": false, 00:20:11.216 "supported_io_types": { 00:20:11.216 "read": true, 00:20:11.216 "write": true, 00:20:11.216 "unmap": true, 00:20:11.216 "flush": true, 00:20:11.216 "reset": true, 00:20:11.216 "nvme_admin": false, 
00:20:11.216 "nvme_io": false, 00:20:11.216 "nvme_io_md": false, 00:20:11.216 "write_zeroes": true, 00:20:11.216 "zcopy": true, 00:20:11.216 "get_zone_info": false, 00:20:11.216 "zone_management": false, 00:20:11.216 "zone_append": false, 00:20:11.216 "compare": false, 00:20:11.216 "compare_and_write": false, 00:20:11.216 "abort": true, 00:20:11.216 "seek_hole": false, 00:20:11.216 "seek_data": false, 00:20:11.216 "copy": true, 00:20:11.216 "nvme_iov_md": false 00:20:11.216 }, 00:20:11.216 "memory_domains": [ 00:20:11.216 { 00:20:11.216 "dma_device_id": "system", 00:20:11.216 "dma_device_type": 1 00:20:11.216 }, 00:20:11.216 { 00:20:11.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.216 "dma_device_type": 2 00:20:11.216 } 00:20:11.216 ], 00:20:11.216 "driver_specific": {} 00:20:11.216 } 00:20:11.216 ]' 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:20:11.216 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:11.474 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:20:11.474 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:20:11.474 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:20:11.474 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:20:11.474 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:12.042 08:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:20:12.042 08:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:20:12.042 08:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:12.042 08:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:12.042 08:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:20:13.941 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:13.941 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:13.941 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:13.941 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:13.941 08:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:13.941 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:20:13.941 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:20:13.941 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:20:14.199 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:20:14.456 08:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:20:15.388 08:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:16.323 ************************************ 00:20:16.323 START TEST filesystem_in_capsule_ext4 00:20:16.323 ************************************ 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:20:16.323 08:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:20:16.323 08:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:20:16.323 mke2fs 1.46.5 (30-Dec-2021) 00:20:16.581 Discarding device blocks: 0/522240 done 00:20:16.581 Creating filesystem with 522240 1k blocks and 130560 inodes 00:20:16.581 Filesystem UUID: c86c2c1d-14f1-4968-9644-c67c3df11183 00:20:16.581 Superblock backups stored on blocks: 00:20:16.581 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:20:16.581 00:20:16.581 Allocating group tables: 0/64 done 00:20:16.581 Writing inode tables: 0/64 done 00:20:16.581 Creating journal (8192 blocks): done 00:20:16.839 Writing superblocks and filesystem accounting information: 0/64 done 00:20:16.839 00:20:16.839 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:20:16.839 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:17.404 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:17.404 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:20:17.404 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:17.404 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:17.663 08:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2293334 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:17.663 00:20:17.663 real 0m1.259s 00:20:17.663 user 0m0.026s 00:20:17.663 sys 0m0.067s 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.663 08:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:20:17.663 ************************************ 00:20:17.663 END TEST filesystem_in_capsule_ext4 00:20:17.663 ************************************ 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:17.663 ************************************ 00:20:17.663 START TEST filesystem_in_capsule_btrfs 00:20:17.663 ************************************ 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:20:17.663 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:20:17.941 btrfs-progs v6.6.2 00:20:17.941 See https://btrfs.readthedocs.io for more information. 00:20:17.941 00:20:17.941 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:20:17.941 NOTE: several default settings have changed in version 5.15, please make sure 00:20:17.941 this does not affect your deployments: 00:20:17.941 - DUP for metadata (-m dup) 00:20:17.941 - enabled no-holes (-O no-holes) 00:20:17.941 - enabled free-space-tree (-R free-space-tree) 00:20:17.941 00:20:17.941 Label: (null) 00:20:17.941 UUID: 4d43c73b-a749-49fd-b819-bd7052adb9c7 00:20:17.941 Node size: 16384 00:20:17.941 Sector size: 4096 00:20:17.941 Filesystem size: 510.00MiB 00:20:17.941 Block group profiles: 00:20:17.941 Data: single 8.00MiB 00:20:17.941 Metadata: DUP 32.00MiB 00:20:17.941 System: DUP 8.00MiB 00:20:17.941 SSD detected: yes 00:20:17.941 Zoned device: no 00:20:17.941 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:20:17.941 Runtime features: free-space-tree 00:20:17.941 Checksum: crc32c 00:20:17.941 Number of devices: 1 00:20:17.941 Devices: 00:20:17.941 ID SIZE PATH 00:20:17.941 1 510.00MiB /dev/nvme0n1p1 00:20:17.941 00:20:17.941 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:20:17.941 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2293334 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:18.237 00:20:18.237 real 0m0.652s 00:20:18.237 user 0m0.037s 00:20:18.237 sys 0m0.132s 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:20:18.237 ************************************ 00:20:18.237 END TEST filesystem_in_capsule_btrfs 00:20:18.237 ************************************ 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.237 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:18.495 ************************************ 00:20:18.495 START TEST filesystem_in_capsule_xfs 00:20:18.495 ************************************ 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:20:18.495 08:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:20:18.495 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:20:18.495 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:20:18.495 = sectsz=512 attr=2, projid32bit=1 00:20:18.495 = crc=1 finobt=1, sparse=1, rmapbt=0 00:20:18.495 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:20:18.495 data = bsize=4096 blocks=130560, imaxpct=25 00:20:18.495 = sunit=0 swidth=0 blks 00:20:18.495 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:20:18.495 log =internal log bsize=4096 blocks=16384, version=2 00:20:18.495 = sectsz=512 sunit=0 blks, lazy-count=1 00:20:18.495 realtime =none extsz=4096 blocks=0, rtextents=0 00:20:19.429 Discarding blocks...Done. 00:20:19.429 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:20:19.429 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:21.957 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:21.957 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:20:21.957 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:21.957 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:20:21.957 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:20:21.957 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:21.957 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2293334 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:21.958 00:20:21.958 real 0m3.493s 00:20:21.958 user 0m0.018s 00:20:21.958 sys 0m0.089s 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:21.958 
08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:20:21.958 ************************************ 00:20:21.958 END TEST filesystem_in_capsule_xfs 00:20:21.958 ************************************ 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:20:21.958 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:22.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:22.215 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:22.215 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:20:22.215 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2293334 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2293334 ']' 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2293334 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
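Teardown mirrors the setup: the test partition is removed and the initiator disconnected before the subsystem is deleted and the target process stopped. Condensed from the traced commands (pid 2293334 is the nvmf_tgt started for this suite; killprocess is essentially a kill plus wait on that pid):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the SPDK_TEST partition again
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # the suite then polls lsblk until the SPDKISFASTANDAWESOME serial disappears
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2293334                                       # killprocess: confirm the pid is reactor_0, then kill and wait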
00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2293334 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2293334' 00:20:22.216 killing process with pid 2293334 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2293334 00:20:22.216 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2293334 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:20:26.404 00:20:26.404 real 0m17.410s 00:20:26.404 user 1m2.856s 00:20:26.404 sys 0m2.479s 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:26.404 ************************************ 00:20:26.404 END TEST nvmf_filesystem_in_capsule 00:20:26.404 ************************************ 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:26.404 rmmod nvme_tcp 00:20:26.404 rmmod nvme_fabrics 00:20:26.404 rmmod nvme_keyring 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.404 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.312 00:20:28.312 real 0m42.142s 00:20:28.312 user 2m11.729s 00:20:28.312 sys 0m7.797s 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:20:28.312 ************************************ 00:20:28.312 END TEST nvmf_filesystem 00:20:28.312 ************************************ 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.312 ************************************ 00:20:28.312 START TEST nvmf_target_discovery 00:20:28.312 ************************************ 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:20:28.312 * Looking for test storage... 
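Before the nvmf_target_discovery subtest gets going, note what the nvmftestfini traced just above amounts to: the kernel initiator modules are unloaded and the test network setup is cleared. A rough equivalent follows; the body of _remove_spdk_ns is hidden by the xtrace redirect, so deleting the cvl_0_0_ns_spdk namespace is an assumption about what it does:

  sync
  modprobe -v -r nvme-tcp        # per the rmmod messages above this also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1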
00:20:28.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.312 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.313 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:20:31.606 08:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.606 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:31.607 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:31.607 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:31.607 Found net devices under 0000:84:00.0: cvl_0_0 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
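This run targets physical E810 NICs, so before the discovery test can pick addresses it has to find them: gather_supported_nvmf_pci_devs builds ID lists for the supported Intel and Mellanox parts, matches the two ports at 0000:84:00.0 and 0000:84:00.1 (0x8086:0x159b), and resolves each to its kernel interface through sysfs, here for the first port and just below for the second. Stripped of the ID bookkeeping, the lookup reduces to:

  for pci in 0000:84:00.0 0000:84:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done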
00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:31.607 Found net devices under 0000:84:00.1: cvl_0_1 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.607 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.607 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.607 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.607 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:31.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:20:31.608 00:20:31.608 --- 10.0.0.2 ping statistics --- 00:20:31.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.608 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:31.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:20:31.608 00:20:31.608 --- 10.0.0.1 ping statistics --- 00:20:31.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.608 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2297415 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2297415 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2297415 ']' 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:31.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.608 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:31.867 [2024-07-23 08:33:44.327369] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:31.867 [2024-07-23 08:33:44.327687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.126 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.387 [2024-07-23 08:33:44.648452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.646 [2024-07-23 08:33:45.146488] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.646 [2024-07-23 08:33:45.146595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.646 [2024-07-23 08:33:45.146659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.646 [2024-07-23 08:33:45.146707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.646 [2024-07-23 08:33:45.146754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.646 [2024-07-23 08:33:45.146965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.646 [2024-07-23 08:33:45.147042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.646 [2024-07-23 08:33:45.147093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.646 [2024-07-23 08:33:45.147106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.583 [2024-07-23 08:33:45.861275] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:20:33.583 08:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.583 Null1 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:20:33.583 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 [2024-07-23 08:33:45.909791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 Null2 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 Null3 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:33.584 08:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 Null4 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.584 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:20:33.843 00:20:33.843 Discovery Log Number of Records 6, Generation counter 6 00:20:33.843 
=====Discovery Log Entry 0====== 00:20:33.843 trtype: tcp 00:20:33.843 adrfam: ipv4 00:20:33.843 subtype: current discovery subsystem 00:20:33.843 treq: not required 00:20:33.843 portid: 0 00:20:33.843 trsvcid: 4420 00:20:33.843 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:33.843 traddr: 10.0.0.2 00:20:33.843 eflags: explicit discovery connections, duplicate discovery information 00:20:33.843 sectype: none 00:20:33.843 =====Discovery Log Entry 1====== 00:20:33.843 trtype: tcp 00:20:33.843 adrfam: ipv4 00:20:33.843 subtype: nvme subsystem 00:20:33.843 treq: not required 00:20:33.843 portid: 0 00:20:33.843 trsvcid: 4420 00:20:33.843 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:33.843 traddr: 10.0.0.2 00:20:33.843 eflags: none 00:20:33.843 sectype: none 00:20:33.843 =====Discovery Log Entry 2====== 00:20:33.843 trtype: tcp 00:20:33.843 adrfam: ipv4 00:20:33.843 subtype: nvme subsystem 00:20:33.843 treq: not required 00:20:33.843 portid: 0 00:20:33.843 trsvcid: 4420 00:20:33.843 subnqn: nqn.2016-06.io.spdk:cnode2 00:20:33.843 traddr: 10.0.0.2 00:20:33.843 eflags: none 00:20:33.843 sectype: none 00:20:33.844 =====Discovery Log Entry 3====== 00:20:33.844 trtype: tcp 00:20:33.844 adrfam: ipv4 00:20:33.844 subtype: nvme subsystem 00:20:33.844 treq: not required 00:20:33.844 portid: 0 00:20:33.844 trsvcid: 4420 00:20:33.844 subnqn: nqn.2016-06.io.spdk:cnode3 00:20:33.844 traddr: 10.0.0.2 00:20:33.844 eflags: none 00:20:33.844 sectype: none 00:20:33.844 =====Discovery Log Entry 4====== 00:20:33.844 trtype: tcp 00:20:33.844 adrfam: ipv4 00:20:33.844 subtype: nvme subsystem 00:20:33.844 treq: not required 00:20:33.844 portid: 0 00:20:33.844 trsvcid: 4420 00:20:33.844 subnqn: nqn.2016-06.io.spdk:cnode4 00:20:33.844 traddr: 10.0.0.2 00:20:33.844 eflags: none 00:20:33.844 sectype: none 00:20:33.844 =====Discovery Log Entry 5====== 00:20:33.844 trtype: tcp 00:20:33.844 adrfam: ipv4 00:20:33.844 subtype: discovery subsystem referral 00:20:33.844 treq: not required 00:20:33.844 portid: 0 00:20:33.844 trsvcid: 4430 00:20:33.844 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:33.844 traddr: 10.0.0.2 00:20:33.844 eflags: none 00:20:33.844 sectype: none 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:20:33.844 Perform nvmf subsystem discovery via RPC 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.844 [ 00:20:33.844 { 00:20:33.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:33.844 "subtype": "Discovery", 00:20:33.844 "listen_addresses": [ 00:20:33.844 { 00:20:33.844 "trtype": "TCP", 00:20:33.844 "adrfam": "IPv4", 00:20:33.844 "traddr": "10.0.0.2", 00:20:33.844 "trsvcid": "4420" 00:20:33.844 } 00:20:33.844 ], 00:20:33.844 "allow_any_host": true, 00:20:33.844 "hosts": [] 00:20:33.844 }, 00:20:33.844 { 00:20:33.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.844 "subtype": "NVMe", 00:20:33.844 "listen_addresses": [ 00:20:33.844 { 00:20:33.844 "trtype": "TCP", 00:20:33.844 "adrfam": "IPv4", 00:20:33.844 "traddr": "10.0.0.2", 00:20:33.844 "trsvcid": "4420" 00:20:33.844 } 00:20:33.844 ], 00:20:33.844 "allow_any_host": true, 00:20:33.844 "hosts": [], 00:20:33.844 
"serial_number": "SPDK00000000000001", 00:20:33.844 "model_number": "SPDK bdev Controller", 00:20:33.844 "max_namespaces": 32, 00:20:33.844 "min_cntlid": 1, 00:20:33.844 "max_cntlid": 65519, 00:20:33.844 "namespaces": [ 00:20:33.844 { 00:20:33.844 "nsid": 1, 00:20:33.844 "bdev_name": "Null1", 00:20:33.844 "name": "Null1", 00:20:33.844 "nguid": "D03C2CD9C68B43D4831040F1C64881E7", 00:20:33.844 "uuid": "d03c2cd9-c68b-43d4-8310-40f1c64881e7" 00:20:33.844 } 00:20:33.844 ] 00:20:33.844 }, 00:20:33.844 { 00:20:33.844 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:33.844 "subtype": "NVMe", 00:20:33.844 "listen_addresses": [ 00:20:33.844 { 00:20:33.844 "trtype": "TCP", 00:20:33.844 "adrfam": "IPv4", 00:20:33.844 "traddr": "10.0.0.2", 00:20:33.844 "trsvcid": "4420" 00:20:33.844 } 00:20:33.844 ], 00:20:33.844 "allow_any_host": true, 00:20:33.844 "hosts": [], 00:20:33.844 "serial_number": "SPDK00000000000002", 00:20:33.844 "model_number": "SPDK bdev Controller", 00:20:33.844 "max_namespaces": 32, 00:20:33.844 "min_cntlid": 1, 00:20:33.844 "max_cntlid": 65519, 00:20:33.844 "namespaces": [ 00:20:33.844 { 00:20:33.844 "nsid": 1, 00:20:33.844 "bdev_name": "Null2", 00:20:33.844 "name": "Null2", 00:20:33.844 "nguid": "A87DEA2F0D9D43F3AFBFF7D651302C4D", 00:20:33.844 "uuid": "a87dea2f-0d9d-43f3-afbf-f7d651302c4d" 00:20:33.844 } 00:20:33.844 ] 00:20:33.844 }, 00:20:33.844 { 00:20:33.844 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:20:33.844 "subtype": "NVMe", 00:20:33.844 "listen_addresses": [ 00:20:33.844 { 00:20:33.844 "trtype": "TCP", 00:20:33.844 "adrfam": "IPv4", 00:20:33.844 "traddr": "10.0.0.2", 00:20:33.844 "trsvcid": "4420" 00:20:33.844 } 00:20:33.844 ], 00:20:33.844 "allow_any_host": true, 00:20:33.844 "hosts": [], 00:20:33.844 "serial_number": "SPDK00000000000003", 00:20:33.844 "model_number": "SPDK bdev Controller", 00:20:33.844 "max_namespaces": 32, 00:20:33.844 "min_cntlid": 1, 00:20:33.844 "max_cntlid": 65519, 00:20:33.844 "namespaces": [ 00:20:33.844 { 00:20:33.844 "nsid": 1, 00:20:33.844 "bdev_name": "Null3", 00:20:33.844 "name": "Null3", 00:20:33.844 "nguid": "61607B5CC8EF4BA5B36DD06231A92095", 00:20:33.844 "uuid": "61607b5c-c8ef-4ba5-b36d-d06231a92095" 00:20:33.844 } 00:20:33.844 ] 00:20:33.844 }, 00:20:33.844 { 00:20:33.844 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:20:33.844 "subtype": "NVMe", 00:20:33.844 "listen_addresses": [ 00:20:33.844 { 00:20:33.844 "trtype": "TCP", 00:20:33.844 "adrfam": "IPv4", 00:20:33.844 "traddr": "10.0.0.2", 00:20:33.844 "trsvcid": "4420" 00:20:33.844 } 00:20:33.844 ], 00:20:33.844 "allow_any_host": true, 00:20:33.844 "hosts": [], 00:20:33.844 "serial_number": "SPDK00000000000004", 00:20:33.844 "model_number": "SPDK bdev Controller", 00:20:33.844 "max_namespaces": 32, 00:20:33.844 "min_cntlid": 1, 00:20:33.844 "max_cntlid": 65519, 00:20:33.844 "namespaces": [ 00:20:33.844 { 00:20:33.844 "nsid": 1, 00:20:33.844 "bdev_name": "Null4", 00:20:33.844 "name": "Null4", 00:20:33.844 "nguid": "25F7DE3982C8496F8686CB59A4D0C0D3", 00:20:33.844 "uuid": "25f7de39-82c8-496f-8686-cb59a4d0c0d3" 00:20:33.844 } 00:20:33.844 ] 00:20:33.844 } 00:20:33.844 ] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:33.844 08:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.844 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.845 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.845 rmmod nvme_tcp 00:20:34.103 rmmod nvme_fabrics 00:20:34.103 rmmod nvme_keyring 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 
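
The trace up to this point shows target/discovery.sh provisioning the target entirely through rpc_cmd: per loop iteration one null bdev, one subsystem (cnode1 through cnode4) with that bdev as a namespace, and one TCP listener on 10.0.0.2:4420, plus a discovery listener and a referral to port 4430, verified first with nvme discover (6 records) and then with nvmf_get_subsystems before everything is torn down again. As a rough standalone sketch of the same provisioning, assuming an already-running nvmf_tgt reachable over the default /var/tmp/spdk.sock and SPDK's scripts/rpc.py (the RPC= path and serial-number pattern below are illustrative, not taken from this run):

    RPC=./scripts/rpc.py                               # assumed location inside an SPDK checkout
    $RPC nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        $RPC bdev_null_create Null$i 102400 512        # name, size, block size - same arguments as the run above
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420           # expect 6 records: discovery, cnode1-4, referral
    $RPC nvmf_get_subsystems | jq -r '.[].nqn'
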
00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2297415 ']' 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2297415 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2297415 ']' 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2297415 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2297415 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2297415' 00:20:34.103 killing process with pid 2297415 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2297415 00:20:34.103 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2297415 00:20:36.008 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:36.008 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:36.008 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:36.009 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.009 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:36.009 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.009 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.009 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.583 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:38.583 00:20:38.583 real 0m9.883s 00:20:38.583 user 0m12.344s 00:20:38.583 sys 0m3.397s 00:20:38.583 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:38.583 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.583 ************************************ 00:20:38.584 END TEST nvmf_target_discovery 00:20:38.584 ************************************ 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.584 ************************************ 00:20:38.584 START TEST nvmf_referrals 00:20:38.584 ************************************ 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:20:38.584 * Looking for test storage... 00:20:38.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.584 08:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:20:38.584 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:41.885 08:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:20:41.885 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:41.886 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.886 08:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:41.886 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:41.886 Found net devices under 0000:84:00.0: cvl_0_0 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:41.886 Found net devices under 0000:84:00.1: cvl_0_1 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:41.886 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:41.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:41.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:20:41.886 00:20:41.886 --- 10.0.0.2 ping statistics --- 00:20:41.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.886 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:41.886 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:41.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:20:41.886 00:20:41.886 --- 10.0.0.1 ping statistics --- 00:20:41.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.887 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2299967 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2299967 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2299967 ']' 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
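
Both test scripts run the same nvmf_tcp_init sequence seen in the trace: the target-facing port (cvl_0_0) is moved into its own network namespace so that target and initiator traffic traverses the link between the two E810 ports rather than the local stack, and one ping in each direction confirms the wiring before nvmf_tgt is started. Condensed into a plain sketch, using the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-facing port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
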
00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.887 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:42.147 [2024-07-23 08:33:54.408456] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:42.147 [2024-07-23 08:33:54.408789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.147 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.407 [2024-07-23 08:33:54.729520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.974 [2024-07-23 08:33:55.206183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.974 [2024-07-23 08:33:55.206326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.974 [2024-07-23 08:33:55.206404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.974 [2024-07-23 08:33:55.206432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.974 [2024-07-23 08:33:55.206458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.974 [2024-07-23 08:33:55.206600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.974 [2024-07-23 08:33:55.206668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.974 [2024-07-23 08:33:55.206720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.974 [2024-07-23 08:33:55.206733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.540 [2024-07-23 08:33:55.965356] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.540 08:33:55 
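At this point nvmfappstart has launched nvmf_tgt inside the target namespace (pid 2299967 above) with core mask 0xF and all tracepoint groups enabled, waited for its RPC socket, and created the TCP transport the referral test will use. Outside the harness the same startup looks roughly like this; the rpc_get_methods polling is a stand-in for the harness's waitforlisten helper, and scripts/rpc.py stands in for the rpc_cmd wrapper:

    # start the SPDK target inside the target namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the RPC socket at /var/tmp/spdk.sock answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # bring up the TCP transport with the options used in this run
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # the test then adds a discovery listener on 10.0.0.2:8009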
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.540 [2024-07-23 08:33:55.985782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.540 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.540 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:43.799 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.057 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.057 08:33:56 
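The referral checks above follow an add / verify / remove pattern: referrals to 127.0.0.2, 127.0.0.3 and 127.0.0.4 on port 4430 are registered with the discovery subsystem over RPC, read back both through the RPC and through an nvme discover against the listener on 10.0.0.2:8009 (the two sorted lists must match), and then removed one by one. Reduced to its essentials, with scripts/rpc.py standing in for rpc_cmd and the hostnqn/hostid flags from the run omitted:

    # register three referrals on the discovery subsystem
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # referral addresses as reported over RPC ...
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # ... and as seen by an initiator querying the discovery service
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # tear them down again and confirm the referral list is empty
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length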
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:20:44.058 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:20:44.058 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:44.058 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:44.058 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:44.058 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:44.058 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:44.316 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:44.574 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:20:44.574 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:20:44.574 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:20:44.574 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:20:44.574 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:20:44.574 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:44.574 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:20:44.574 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:20:44.574 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:20:44.574 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:20:44.574 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:20:44.574 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:44.574 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.832 08:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:44.832 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:45.089 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:20:45.089 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:20:45.089 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:20:45.089 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:20:45.089 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:20:45.089 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:45.089 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:45.347 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
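The second half of the test exercises referrals that carry an explicit subsystem NQN: 127.0.0.2:4430 is registered once as a plain discovery referral (-n discovery) and once pointing at nqn.2016-06.io.spdk:cnode1, and the discovery log page is then filtered by record subtype to confirm that the NQN-bearing referral surfaces as an "nvme subsystem" entry while the plain one keeps the well-known discovery NQN. Pulled out of the trace for readability (hostnqn/hostid flags omitted), the distinguishing jq filters are:

    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    log=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json)

    # the referral added with an explicit subsystem NQN shows up as an NVMe subsystem record
    echo "$log" | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    # -> nqn.2016-06.io.spdk:cnode1

    # the plain referral keeps the well-known discovery NQN
    echo "$log" | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'
    # -> nqn.2014-08.org.nvmexpress.discovery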
00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.606 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.606 rmmod nvme_tcp 00:20:45.606 rmmod nvme_fabrics 00:20:45.606 rmmod nvme_keyring 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2299967 ']' 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2299967 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2299967 ']' 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2299967 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2299967 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2299967' 00:20:45.606 killing process with pid 2299967 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2299967 00:20:45.606 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2299967 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.140 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:50.047 00:20:50.047 real 0m11.503s 00:20:50.047 user 0m19.633s 00:20:50.047 sys 0m3.827s 00:20:50.047 08:34:02 
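Teardown (nvmftestfini) then mirrors the setup: the host-side NVMe fabric modules are unloaded, the target process is killed, the test namespace is removed and the initiator address flushed, after which the harness prints the per-test timing (about 11.5 s wall clock for nvmf_referrals here). A minimal sketch of that cleanup; treating namespace removal as a plain ip netns delete is an assumption about what _remove_spdk_ns amounts to for this run:

    modprobe -r nvme-tcp nvme-fabrics      # drops nvme_tcp, nvme_fabrics, nvme_keyring as logged above
    kill "$nvmfpid" && wait "$nvmfpid"     # stop the nvmf_tgt started earlier
    ip netns delete cvl_0_0_ns_spdk        # assumption: equivalent of _remove_spdk_ns for this namespace
    ip -4 addr flush cvl_0_1               # drop the initiator address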
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:50.047 ************************************ 00:20:50.047 END TEST nvmf_referrals 00:20:50.047 ************************************ 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.047 ************************************ 00:20:50.047 START TEST nvmf_connect_disconnect 00:20:50.047 ************************************ 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:20:50.047 * Looking for test storage... 00:20:50.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
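Each test script is driven by the run_test wrapper, which prints the START/END banners seen above and times the script (the real/user/sys line); the next invocation hands connect_disconnect.sh the --transport=tcp argument. A rough reconstruction of that pattern, not the actual helper (which, as the trace shows, also checks the argument count and propagates the return code):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                          # e.g. connect_disconnect.sh --transport=tcp
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }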
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.047 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.048 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:20:53.334 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:53.335 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:53.335 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.335 08:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:53.335 Found net devices under 0000:84:00.0: cvl_0_0 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:53.335 Found net devices under 0000:84:00.1: cvl_0_1 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
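Before the connect/disconnect run, nvmftestinit re-discovers the usable NIC ports: it walks a whitelist of supported Intel E810/X722 and Mellanox device IDs, matches 0000:84:00.0 and 0000:84:00.1 (0x8086:0x159b, the ice-driven E810), and resolves each PCI function to its kernel net device through sysfs, which is how the two functions map to cvl_0_0 and cvl_0_1 above. The sysfs lookup at the core of it:

    # resolve the net device behind each matched PCI function (as for 0000:84:00.0 / .1 above)
    for pci in 0000:84:00.0 0000:84:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue
            name=${dev##*/}
            echo "Found net devices under $pci: $name ($(cat /sys/class/net/"$name"/operstate))"
        done
    done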
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.335 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:53.335 00:20:53.335 --- 10.0.0.2 ping statistics --- 00:20:53.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.336 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:20:53.336 00:20:53.336 --- 10.0.0.1 ping statistics --- 00:20:53.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.336 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2302676 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2302676 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2302676 ']' 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.336 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:53.336 [2024-07-23 08:34:05.822778] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:53.336 [2024-07-23 08:34:05.822965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.594 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.594 [2024-07-23 08:34:06.059205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.161 [2024-07-23 08:34:06.583261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.161 [2024-07-23 08:34:06.583407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.161 [2024-07-23 08:34:06.583451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.161 [2024-07-23 08:34:06.583476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.161 [2024-07-23 08:34:06.583502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.161 [2024-07-23 08:34:06.583619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.161 [2024-07-23 08:34:06.583690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.161 [2024-07-23 08:34:06.583941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.161 [2024-07-23 08:34:06.583949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:55.096 [2024-07-23 08:34:07.391554] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.096 08:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:55.096 [2024-07-23 08:34:07.531532] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:20:55.096 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:20:57.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:00.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:02.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:04.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:07.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:09.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:11.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:14.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:16.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:18.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:21.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:23.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:26.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:28.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:30.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:33.163 
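The trace above shows connect_disconnect.sh standing up the target and entering its 100-iteration loop; the "disconnected 1 controller(s)" messages that follow are one per iteration. A minimal sketch of the equivalent manual sequence, assuming the standalone rpc.py client on the target host and nvme-cli on the initiator (the rpc.py path is an assumption; the NQN, address, port and iteration count are taken from the trace):

  # Target side: nvmf_tgt is already running inside the namespace, started above as
  #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  # then create the transport, a malloc bdev and an exported subsystem:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB, 512 B blocks -> returns Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: 100 connect/disconnect iterations (num_iterations=100, NVME_CONNECT='nvme connect -i 8'):
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
  done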
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:35.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:37.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:40.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:42.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:45.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:47.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:49.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:52.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:54.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:56.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:59.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:01.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:03.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:06.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:07.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:10.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:13.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:14.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:17.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:20.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:22.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:25.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:27.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:29.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:32.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:34.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:37.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:39.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:41.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:44.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:46.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:49.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:51.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:53.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:56.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:58.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:00.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:03.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:05.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:08.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:10.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:12.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:15.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:17.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:19.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:22.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:24.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:26.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:23:29.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:31.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:33.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:36.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:38.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:41.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:43.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:45.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:48.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:50.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:53.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:55.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:57.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:00.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:02.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:04.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:07.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:09.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:12.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:14.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:16.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:19.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:21.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:23.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:26.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:28.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:30.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:33.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:35.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:38.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:40.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:42.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:45.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:47.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:49.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:52.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.392 rmmod nvme_tcp 00:24:52.392 rmmod nvme_fabrics 00:24:52.392 rmmod nvme_keyring 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2302676 ']' 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2302676 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2302676 ']' 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2302676 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2302676 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2302676' 00:24:52.392 killing process with pid 2302676 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2302676 00:24:52.392 08:38:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2302676 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.935 08:38:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.846 00:24:56.846 real 4m7.045s 00:24:56.846 user 15m32.224s 00:24:56.846 sys 0m35.404s 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
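For reference, the nvmftestfini tear-down traced above amounts to the following; the pid and interface names are specific to this run, and the namespace removal is an assumption about what _remove_spdk_ns does (the namespace is re-created for the next test, so something below removes it):

  modprobe -v -r nvme-tcp          # also unloads nvme_tcp / nvme_fabrics / nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill 2302676                     # killprocess: stop the nvmf_tgt reactor process, then wait for it
  ip netns delete cvl_0_0_ns_spdk  # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1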
common/autotest_common.sh@10 -- # set +x 00:24:56.846 ************************************ 00:24:56.846 END TEST nvmf_connect_disconnect 00:24:56.846 ************************************ 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:56.846 ************************************ 00:24:56.846 START TEST nvmf_multitarget 00:24:56.846 ************************************ 00:24:56.846 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:24:57.106 * Looking for test storage... 00:24:57.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.106 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.107 
08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:24:57.107 08:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.107 08:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.406 08:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:00.406 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:00.406 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.406 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:00.407 Found net devices under 0000:84:00.0: cvl_0_0 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:00.407 Found net devices under 0000:84:00.1: cvl_0_1 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:25:00.407 00:25:00.407 --- 10.0.0.2 ping statistics --- 00:25:00.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.407 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:25:00.407 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:00.407 00:25:00.407 --- 10.0.0.1 ping statistics --- 00:25:00.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.407 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2333213 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2333213 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2333213 ']' 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
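The multitarget run reuses the nvmf_tcp_init layout traced above: one E810 port (cvl_0_0) is moved into a network namespace to host the target, while its peer port (cvl_0_1) stays in the root namespace as the initiator. Collected from the trace, the setup is:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target, shown above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator, shown above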
00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.668 08:38:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:00.668 [2024-07-23 08:38:13.148510] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:00.668 [2024-07-23 08:38:13.148779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.934 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.934 [2024-07-23 08:38:13.449327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.504 [2024-07-23 08:38:13.946990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.504 [2024-07-23 08:38:13.947119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.504 [2024-07-23 08:38:13.947181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.504 [2024-07-23 08:38:13.947228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.504 [2024-07-23 08:38:13.947274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.504 [2024-07-23 08:38:13.947465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.504 [2024-07-23 08:38:13.947531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.504 [2024-07-23 08:38:13.947579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.504 [2024-07-23 08:38:13.947593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:25:02.446 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:25:02.705 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:25:02.705 08:38:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:25:02.705 "nvmf_tgt_1" 00:25:02.705 08:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:25:02.705 "nvmf_tgt_2" 00:25:02.965 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:25:02.965 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:25:03.224 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:25:03.224 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:25:03.224 true 00:25:03.224 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:25:03.482 true 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.482 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.482 rmmod nvme_tcp 00:25:03.482 rmmod nvme_fabrics 00:25:03.740 rmmod nvme_keyring 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2333213 ']' 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2333213 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2333213 ']' 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2333213 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
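Stripped of the xtrace prefixes, the multitarget checks above reduce to this sequence against the test's helper client (helper path as in the trace; the explicit count checks restate the '[' N '!=' N ']' comparisons logged above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target name
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
  $rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target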
00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333213 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333213' 00:25:03.740 killing process with pid 2333213 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2333213 00:25:03.740 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2333213 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.651 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.560 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:07.560 00:25:07.560 real 0m10.746s 00:25:07.560 user 0m17.022s 00:25:07.560 sys 0m3.692s 00:25:07.560 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:07.560 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:25:07.560 ************************************ 00:25:07.560 END TEST nvmf_multitarget 00:25:07.560 ************************************ 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:07.827 ************************************ 00:25:07.827 START TEST nvmf_rpc 00:25:07.827 ************************************ 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:25:07.827 * Looking for test storage... 
00:25:07.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.827 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:07.828 08:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:07.828 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:25:11.157 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.158 08:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:11.158 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:11.158 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.158 
08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:11.158 Found net devices under 0000:84:00.0: cvl_0_0 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:11.158 Found net devices under 0000:84:00.1: cvl_0_1 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.158 08:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:11.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:25:11.158 00:25:11.158 --- 10.0.0.2 ping statistics --- 00:25:11.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.158 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:11.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:25:11.158 00:25:11.158 --- 10.0.0.1 ping statistics --- 00:25:11.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.158 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:11.158 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2335845 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:11.419 08:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2335845 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2335845 ']' 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.419 08:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:11.419 [2024-07-23 08:38:23.902116] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:11.419 [2024-07-23 08:38:23.902459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.679 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.939 [2024-07-23 08:38:24.217393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:12.509 [2024-07-23 08:38:24.732741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.509 [2024-07-23 08:38:24.732866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.509 [2024-07-23 08:38:24.732926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.509 [2024-07-23 08:38:24.732972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.509 [2024-07-23 08:38:24.733019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
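At this point in the trace nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace and waitforlisten is blocking until the RPC socket at /var/tmp/spdk.sock answers. A minimal standalone sketch of that launch-and-poll pattern, assuming the scripts/rpc.py helper and its rpc_get_methods call (the harness's own waitforlisten implementation is not shown in this log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the target's RPC server is up and answering requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done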
00:25:12.509 [2024-07-23 08:38:24.733250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.509 [2024-07-23 08:38:24.733336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.509 [2024-07-23 08:38:24.733377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.509 [2024-07-23 08:38:24.733388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:25:13.079 "tick_rate": 2700000000, 00:25:13.079 "poll_groups": [ 00:25:13.079 { 00:25:13.079 "name": "nvmf_tgt_poll_group_000", 00:25:13.079 "admin_qpairs": 0, 00:25:13.079 "io_qpairs": 0, 00:25:13.079 "current_admin_qpairs": 0, 00:25:13.079 "current_io_qpairs": 0, 00:25:13.079 "pending_bdev_io": 0, 00:25:13.079 "completed_nvme_io": 0, 00:25:13.079 "transports": [] 00:25:13.079 }, 00:25:13.079 { 00:25:13.079 "name": "nvmf_tgt_poll_group_001", 00:25:13.079 "admin_qpairs": 0, 00:25:13.079 "io_qpairs": 0, 00:25:13.079 "current_admin_qpairs": 0, 00:25:13.079 "current_io_qpairs": 0, 00:25:13.079 "pending_bdev_io": 0, 00:25:13.079 "completed_nvme_io": 0, 00:25:13.079 "transports": [] 00:25:13.079 }, 00:25:13.079 { 00:25:13.079 "name": "nvmf_tgt_poll_group_002", 00:25:13.079 "admin_qpairs": 0, 00:25:13.079 "io_qpairs": 0, 00:25:13.079 "current_admin_qpairs": 0, 00:25:13.079 "current_io_qpairs": 0, 00:25:13.079 "pending_bdev_io": 0, 00:25:13.079 "completed_nvme_io": 0, 00:25:13.079 "transports": [] 00:25:13.079 }, 00:25:13.079 { 00:25:13.079 "name": "nvmf_tgt_poll_group_003", 00:25:13.079 "admin_qpairs": 0, 00:25:13.079 "io_qpairs": 0, 00:25:13.079 "current_admin_qpairs": 0, 00:25:13.079 "current_io_qpairs": 0, 00:25:13.079 "pending_bdev_io": 0, 00:25:13.079 "completed_nvme_io": 0, 00:25:13.079 "transports": [] 00:25:13.079 } 00:25:13.079 ] 00:25:13.079 }' 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
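The jcount and jsum checks above and below reduce the nvmf_get_stats JSON with jq; a hedged sketch of equivalent standalone one-liners, paraphrased from the jq/awk filters visible in the trace rather than copied from target/rpc.sh:

    stats=$(./scripts/rpc.py nvmf_get_stats)
    # one poll group per core in the 0xF mask, so 4 names are expected
    echo "$stats" | jq '.poll_groups[].name' | wc -l
    # sum a numeric per-group counter; 0 before any host has connected
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'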
00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:25:13.079 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.080 [2024-07-23 08:38:25.537664] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:25:13.080 "tick_rate": 2700000000, 00:25:13.080 "poll_groups": [ 00:25:13.080 { 00:25:13.080 "name": "nvmf_tgt_poll_group_000", 00:25:13.080 "admin_qpairs": 0, 00:25:13.080 "io_qpairs": 0, 00:25:13.080 "current_admin_qpairs": 0, 00:25:13.080 "current_io_qpairs": 0, 00:25:13.080 "pending_bdev_io": 0, 00:25:13.080 "completed_nvme_io": 0, 00:25:13.080 "transports": [ 00:25:13.080 { 00:25:13.080 "trtype": "TCP" 00:25:13.080 } 00:25:13.080 ] 00:25:13.080 }, 00:25:13.080 { 00:25:13.080 "name": "nvmf_tgt_poll_group_001", 00:25:13.080 "admin_qpairs": 0, 00:25:13.080 "io_qpairs": 0, 00:25:13.080 "current_admin_qpairs": 0, 00:25:13.080 "current_io_qpairs": 0, 00:25:13.080 "pending_bdev_io": 0, 00:25:13.080 "completed_nvme_io": 0, 00:25:13.080 "transports": [ 00:25:13.080 { 00:25:13.080 "trtype": "TCP" 00:25:13.080 } 00:25:13.080 ] 00:25:13.080 }, 00:25:13.080 { 00:25:13.080 "name": "nvmf_tgt_poll_group_002", 00:25:13.080 "admin_qpairs": 0, 00:25:13.080 "io_qpairs": 0, 00:25:13.080 "current_admin_qpairs": 0, 00:25:13.080 "current_io_qpairs": 0, 00:25:13.080 "pending_bdev_io": 0, 00:25:13.080 "completed_nvme_io": 0, 00:25:13.080 "transports": [ 00:25:13.080 { 00:25:13.080 "trtype": "TCP" 00:25:13.080 } 00:25:13.080 ] 00:25:13.080 }, 00:25:13.080 { 00:25:13.080 "name": "nvmf_tgt_poll_group_003", 00:25:13.080 "admin_qpairs": 0, 00:25:13.080 "io_qpairs": 0, 00:25:13.080 "current_admin_qpairs": 0, 00:25:13.080 "current_io_qpairs": 0, 00:25:13.080 "pending_bdev_io": 0, 00:25:13.080 "completed_nvme_io": 0, 00:25:13.080 "transports": [ 00:25:13.080 { 00:25:13.080 "trtype": "TCP" 00:25:13.080 } 00:25:13.080 ] 00:25:13.080 } 00:25:13.080 ] 00:25:13.080 }' 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:25:13.080 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:25:13.341 08:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.341 Malloc1 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.341 [2024-07-23 08:38:25.832339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:25:13.341 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:25:13.601 [2024-07-23 08:38:25.866024] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:25:13.601 Failed to write to /dev/nvme-fabrics: Input/output error 00:25:13.601 could not add new controller: failed to write to nvme-fabrics device 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.602 08:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:14.172 08:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:25:14.172 08:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:25:14.172 08:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:14.172 08:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:14.172 08:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:16.080 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:16.080 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:16.080 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:16.080 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:16.080 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:16.080 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:16.080 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:16.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:16.339 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:16.339 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:16.339 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:16.339 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:16.339 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:16.339 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:16.599 [2024-07-23 08:38:28.898100] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:25:16.599 Failed to write to /dev/nvme-fabrics: Input/output error 00:25:16.599 could not add new controller: failed to write to nvme-fabrics device 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.599 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:17.168 08:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:25:17.168 08:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:25:17.168 08:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.168 08:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:17.168 08:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
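The waitforserial and waitforserial_disconnect helpers traced around this point simply poll lsblk for the subsystem serial; a condensed sketch of that loop, reusing the SPDKISFASTANDAWESOME serial and the 15-retry budget seen in the trace (the exact autotest_common.sh wording may differ):

    serial=SPDKISFASTANDAWESOME
    i=0
    # wait until exactly one block device carrying the expected serial appears
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == 1 )) && break
    done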
00:25:19.076 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:19.076 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:19.076 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:19.076 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:19.076 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.076 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:19.076 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:19.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:19.336 [2024-07-23 08:38:31.831298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.336 
08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.336 08:38:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:20.274 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:25:20.274 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:25:20.274 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.274 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:20.274 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:22.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
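Each pass of the seq 1 5 loop traced here repeats the same build-up and teardown against cnode1; a condensed sketch of one pass, using the RPC names and arguments visible in the trace (the test drives them through its rpc_cmd wrapper and also passes --hostnqn/--hostid to nvme connect, omitted here for brevity):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # then wait for the serial
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1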
00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.206 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.465 [2024-07-23 08:38:34.754859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.465 08:38:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:23.035 08:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:25:23.035 08:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:25:23.035 08:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.035 08:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:23.035 08:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:25.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:25.575 [2024-07-23 08:38:37.714892] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.575 08:38:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:26.144 08:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:25:26.144 08:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:25:26.144 08:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.144 08:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:26.144 08:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:28.068 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:28.068 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:28.068 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:28.068 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:28.068 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.068 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:28.068 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:28.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:28.328 08:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 [2024-07-23 08:38:40.760377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.328 08:38:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:29.267 08:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:25:29.268 08:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:25:29.268 08:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.268 08:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:29.268 08:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:31.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:31.178 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 [2024-07-23 08:38:43.747070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.438 08:38:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:32.008 08:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:25:32.008 08:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:25:32.008 08:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.008 08:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:32.008 08:38:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:34.549 08:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:34.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 [2024-07-23 08:38:46.703102] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 [2024-07-23 08:38:46.751171] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 [2024-07-23 08:38:46.799371] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.549 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 [2024-07-23 08:38:46.847577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 [2024-07-23 08:38:46.895757] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:25:34.550 "tick_rate": 2700000000, 00:25:34.550 "poll_groups": [ 00:25:34.550 { 00:25:34.550 "name": "nvmf_tgt_poll_group_000", 00:25:34.550 "admin_qpairs": 2, 00:25:34.550 "io_qpairs": 84, 00:25:34.550 "current_admin_qpairs": 0, 00:25:34.550 "current_io_qpairs": 0, 00:25:34.550 "pending_bdev_io": 0, 00:25:34.550 "completed_nvme_io": 262, 00:25:34.550 "transports": [ 00:25:34.550 { 00:25:34.550 "trtype": "TCP" 00:25:34.550 } 00:25:34.550 ] 00:25:34.550 }, 00:25:34.550 { 00:25:34.550 "name": "nvmf_tgt_poll_group_001", 00:25:34.550 "admin_qpairs": 2, 00:25:34.550 "io_qpairs": 84, 00:25:34.550 "current_admin_qpairs": 0, 00:25:34.550 "current_io_qpairs": 0, 00:25:34.550 "pending_bdev_io": 0, 00:25:34.550 "completed_nvme_io": 131, 00:25:34.550 "transports": [ 00:25:34.550 { 00:25:34.550 "trtype": "TCP" 00:25:34.550 } 00:25:34.550 ] 00:25:34.550 }, 00:25:34.550 { 00:25:34.550 "name": "nvmf_tgt_poll_group_002", 00:25:34.550 "admin_qpairs": 1, 00:25:34.550 "io_qpairs": 84, 00:25:34.550 "current_admin_qpairs": 0, 00:25:34.550 "current_io_qpairs": 0, 00:25:34.550 "pending_bdev_io": 0, 00:25:34.550 "completed_nvme_io": 155, 00:25:34.550 "transports": [ 00:25:34.550 { 00:25:34.550 "trtype": "TCP" 00:25:34.550 } 00:25:34.550 ] 00:25:34.550 }, 00:25:34.550 { 00:25:34.550 "name": "nvmf_tgt_poll_group_003", 00:25:34.550 "admin_qpairs": 2, 00:25:34.550 "io_qpairs": 84, 00:25:34.550 "current_admin_qpairs": 0, 00:25:34.550 "current_io_qpairs": 0, 00:25:34.550 "pending_bdev_io": 0, 00:25:34.550 "completed_nvme_io": 138, 00:25:34.550 "transports": [ 00:25:34.550 { 00:25:34.550 "trtype": "TCP" 00:25:34.550 } 00:25:34.550 ] 00:25:34.550 } 00:25:34.550 ] 00:25:34.550 }' 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:25:34.550 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:25:34.550 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:25:34.550 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:25:34.550 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:25:34.550 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:25:34.550 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:34.810 rmmod nvme_tcp 00:25:34.810 rmmod nvme_fabrics 00:25:34.810 rmmod nvme_keyring 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2335845 ']' 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2335845 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2335845 ']' 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2335845 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2335845 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2335845' 00:25:34.810 killing process with pid 2335845 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2335845 00:25:34.810 08:38:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2335845 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.352 08:38:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:39.262 00:25:39.262 real 0m31.549s 00:25:39.262 user 1m36.138s 00:25:39.262 sys 0m5.983s 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:39.262 ************************************ 00:25:39.262 END TEST nvmf_rpc 00:25:39.262 ************************************ 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:39.262 ************************************ 00:25:39.262 START TEST nvmf_invalid 00:25:39.262 ************************************ 00:25:39.262 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:25:39.522 * Looking for test storage... 
00:25:39.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:25:39.522 08:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.818 08:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:42.818 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:42.819 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:42.819 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:42.819 Found net devices under 0000:84:00.0: cvl_0_0 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:42.819 Found net devices under 0000:84:00.1: cvl_0_1 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.819 08:38:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:42.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:25:42.819 00:25:42.819 --- 10.0.0.2 ping statistics --- 00:25:42.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.819 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:42.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:42.819 00:25:42.819 --- 10.0.0.1 ping statistics --- 00:25:42.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.819 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2340843 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2340843 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2340843 ']' 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.819 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:42.819 [2024-07-23 08:38:55.329536] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:42.819 [2024-07-23 08:38:55.329862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.079 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.339 [2024-07-23 08:38:55.645347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.909 [2024-07-23 08:38:56.141799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.909 [2024-07-23 08:38:56.141918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.909 [2024-07-23 08:38:56.141979] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.909 [2024-07-23 08:38:56.142025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.909 [2024-07-23 08:38:56.142071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.909 [2024-07-23 08:38:56.142295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.909 [2024-07-23 08:38:56.142384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.909 [2024-07-23 08:38:56.142416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.909 [2024-07-23 08:38:56.142430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:44.479 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16720 00:25:45.049 [2024-07-23 08:38:57.514789] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:25:45.049 08:38:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:25:45.049 { 00:25:45.049 "nqn": "nqn.2016-06.io.spdk:cnode16720", 00:25:45.049 "tgt_name": "foobar", 00:25:45.049 "method": "nvmf_create_subsystem", 00:25:45.049 "req_id": 1 00:25:45.049 } 00:25:45.049 Got JSON-RPC error response 00:25:45.049 response: 00:25:45.049 { 00:25:45.049 "code": -32603, 00:25:45.049 "message": "Unable to find target foobar" 00:25:45.049 }' 00:25:45.049 08:38:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:25:45.049 { 00:25:45.049 "nqn": "nqn.2016-06.io.spdk:cnode16720", 00:25:45.049 "tgt_name": "foobar", 00:25:45.049 "method": "nvmf_create_subsystem", 00:25:45.049 "req_id": 1 
00:25:45.049 } 00:25:45.049 Got JSON-RPC error response 00:25:45.049 response: 00:25:45.049 { 00:25:45.049 "code": -32603, 00:25:45.049 "message": "Unable to find target foobar" 00:25:45.049 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:25:45.049 08:38:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:25:45.049 08:38:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14558 00:25:45.618 [2024-07-23 08:38:58.129151] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14558: invalid serial number 'SPDKISFASTANDAWESOME' 00:25:45.878 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:25:45.878 { 00:25:45.878 "nqn": "nqn.2016-06.io.spdk:cnode14558", 00:25:45.878 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:25:45.878 "method": "nvmf_create_subsystem", 00:25:45.878 "req_id": 1 00:25:45.878 } 00:25:45.878 Got JSON-RPC error response 00:25:45.878 response: 00:25:45.878 { 00:25:45.878 "code": -32602, 00:25:45.878 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:25:45.878 }' 00:25:45.878 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:25:45.878 { 00:25:45.878 "nqn": "nqn.2016-06.io.spdk:cnode14558", 00:25:45.878 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:25:45.878 "method": "nvmf_create_subsystem", 00:25:45.878 "req_id": 1 00:25:45.878 } 00:25:45.878 Got JSON-RPC error response 00:25:45.878 response: 00:25:45.878 { 00:25:45.878 "code": -32602, 00:25:45.878 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:25:45.878 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:25:45.878 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:25:45.878 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30475 00:25:46.488 [2024-07-23 08:38:58.739396] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30475: invalid model number 'SPDK_Controller' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:25:46.488 { 00:25:46.488 "nqn": "nqn.2016-06.io.spdk:cnode30475", 00:25:46.488 "model_number": "SPDK_Controller\u001f", 00:25:46.488 "method": "nvmf_create_subsystem", 00:25:46.488 "req_id": 1 00:25:46.488 } 00:25:46.488 Got JSON-RPC error response 00:25:46.488 response: 00:25:46.488 { 00:25:46.488 "code": -32602, 00:25:46.488 "message": "Invalid MN SPDK_Controller\u001f" 00:25:46.488 }' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:25:46.488 { 00:25:46.488 "nqn": "nqn.2016-06.io.spdk:cnode30475", 00:25:46.488 "model_number": "SPDK_Controller\u001f", 00:25:46.488 "method": "nvmf_create_subsystem", 00:25:46.488 "req_id": 1 00:25:46.488 } 00:25:46.488 Got JSON-RPC error response 00:25:46.488 response: 00:25:46.488 { 00:25:46.488 "code": -32602, 00:25:46.488 "message": "Invalid MN SPDK_Controller\u001f" 00:25:46.488 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # 
local length=21 ll 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
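[Editor's note] The long run of printf/echo/string+= entries here and below is the test's random-string helper being traced one character at a time: it draws ASCII codes from 32 through 127 (so the result can contain spaces, quotes and the non-printable DEL byte) and appends each as a literal character. A condensed sketch of that helper, reconstructed from the trace rather than copied from invalid.sh, is:

    # condensed reconstruction of the gen_random_s helper seen in this trace
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                 # ASCII 32..127, including DEL (0x7f)
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code, print it as hex, then expand it to the literal character
            string+=$(printf "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        printf '%s\n' "$string"                     # invalid.sh additionally guards a leading '-'
    }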
00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:25:46.488 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=q 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x35' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'BjKz]>[G '\'''\''q8HamP95O+' 00:25:46.489 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'BjKz]>[G '\'''\''q8HamP95O+' nqn.2016-06.io.spdk:cnode16755 00:25:47.059 [2024-07-23 08:38:59.494248] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16755: invalid serial number 'BjKz]>[G ''q8HamP95O+' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:25:47.059 { 00:25:47.059 "nqn": "nqn.2016-06.io.spdk:cnode16755", 00:25:47.059 "serial_number": "BjKz]>[G '\'''\''q8HamP95O+", 00:25:47.059 "method": "nvmf_create_subsystem", 00:25:47.059 "req_id": 1 00:25:47.059 } 00:25:47.059 Got JSON-RPC error response 00:25:47.059 response: 00:25:47.059 { 00:25:47.059 "code": -32602, 00:25:47.059 "message": "Invalid SN BjKz]>[G '\'''\''q8HamP95O+" 00:25:47.059 }' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:25:47.059 { 00:25:47.059 "nqn": "nqn.2016-06.io.spdk:cnode16755", 00:25:47.059 "serial_number": "BjKz]>[G ''q8HamP95O+", 00:25:47.059 "method": "nvmf_create_subsystem", 00:25:47.059 "req_id": 1 00:25:47.059 } 00:25:47.059 Got JSON-RPC error response 00:25:47.059 response: 00:25:47.059 { 00:25:47.059 "code": -32602, 00:25:47.059 "message": "Invalid SN BjKz]>[G ''q8HamP95O+" 00:25:47.059 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' 
'68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:25:47.059 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x51' 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.060 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 115 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.321 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:25:47.322 08:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:25:47.322 
08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:25:47.322 
08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:25:47.322 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:25:47.323 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:25:47.323 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:25:47.323 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j;s\7Qj]&+kls+lPtP*_}AzWd\QE`VL2"7BFmB5r' 00:25:47.323 08:38:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'j;s\7Qj]&+kls+lPtP*_}AzWd\QE`VL2"7BFmB5r' nqn.2016-06.io.spdk:cnode29451 00:25:47.894 [2024-07-23 08:39:00.377684] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29451: invalid model number 'j;s\7Qj]&+kls+lPtP*_}AzWd\QE`VL2"7BFmB5r' 00:25:47.894 08:39:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:25:47.894 { 00:25:47.894 "nqn": "nqn.2016-06.io.spdk:cnode29451", 00:25:47.894 "model_number": "j;s\\7Qj]&+kls+lPtP*_}AzWd\\QE\u007f`VL2\"7BFmB5r", 00:25:47.894 "method": "nvmf_create_subsystem", 00:25:47.894 "req_id": 1 00:25:47.894 } 00:25:47.894 Got JSON-RPC error response 00:25:47.894 response: 00:25:47.894 { 00:25:47.894 "code": -32602, 00:25:47.894 "message": "Invalid MN j;s\\7Qj]&+kls+lPtP*_}AzWd\\QE\u007f`VL2\"7BFmB5r" 00:25:47.894 }' 00:25:47.894 08:39:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:25:47.894 { 00:25:47.894 "nqn": "nqn.2016-06.io.spdk:cnode29451", 00:25:47.894 "model_number": "j;s\\7Qj]&+kls+lPtP*_}AzWd\\QE\u007f`VL2\"7BFmB5r", 00:25:47.894 "method": "nvmf_create_subsystem", 00:25:47.894 "req_id": 1 00:25:47.894 } 00:25:47.894 Got JSON-RPC error response 00:25:47.894 response: 00:25:47.894 { 00:25:47.894 "code": -32602, 00:25:47.894 "message": "Invalid MN j;s\\7Qj]&+kls+lPtP*_}AzWd\\QE\u007f`VL2\"7BFmB5r" 00:25:47.894 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:25:47.894 08:39:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:25:48.464 [2024-07-23 08:39:00.959930] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.725 08:39:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:25:49.295 08:39:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:25:49.295 08:39:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:25:49.295 08:39:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # head -n 1 00:25:49.295 08:39:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:25:49.295 08:39:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:25:49.866 [2024-07-23 08:39:02.196658] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:25:49.866 08:39:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:25:49.866 { 00:25:49.866 "nqn": "nqn.2016-06.io.spdk:cnode", 00:25:49.866 "listen_address": { 00:25:49.866 "trtype": "tcp", 00:25:49.866 "traddr": "", 00:25:49.866 "trsvcid": "4421" 00:25:49.866 }, 00:25:49.866 "method": "nvmf_subsystem_remove_listener", 00:25:49.866 "req_id": 1 00:25:49.866 } 00:25:49.866 Got JSON-RPC error response 00:25:49.866 response: 00:25:49.866 { 00:25:49.866 "code": -32602, 00:25:49.866 "message": "Invalid parameters" 00:25:49.866 }' 00:25:49.866 08:39:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:25:49.866 { 00:25:49.866 "nqn": "nqn.2016-06.io.spdk:cnode", 00:25:49.866 "listen_address": { 00:25:49.866 "trtype": "tcp", 00:25:49.866 "traddr": "", 00:25:49.866 "trsvcid": "4421" 00:25:49.866 }, 00:25:49.866 "method": "nvmf_subsystem_remove_listener", 00:25:49.866 "req_id": 1 00:25:49.866 } 00:25:49.866 Got JSON-RPC error response 00:25:49.866 response: 00:25:49.866 { 00:25:49.866 "code": -32602, 00:25:49.866 "message": "Invalid parameters" 00:25:49.866 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:25:49.866 08:39:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23931 -i 0 00:25:50.434 [2024-07-23 08:39:02.806916] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23931: invalid cntlid range [0-65519] 00:25:50.434 08:39:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:25:50.434 { 00:25:50.434 "nqn": "nqn.2016-06.io.spdk:cnode23931", 00:25:50.434 "min_cntlid": 0, 00:25:50.434 "method": "nvmf_create_subsystem", 00:25:50.434 "req_id": 1 00:25:50.434 } 00:25:50.434 Got JSON-RPC error response 00:25:50.434 response: 00:25:50.434 { 00:25:50.434 "code": -32602, 00:25:50.434 "message": "Invalid cntlid range [0-65519]" 00:25:50.434 }' 00:25:50.434 08:39:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:25:50.434 { 00:25:50.434 "nqn": "nqn.2016-06.io.spdk:cnode23931", 00:25:50.434 "min_cntlid": 0, 00:25:50.434 "method": "nvmf_create_subsystem", 00:25:50.434 "req_id": 1 00:25:50.434 } 00:25:50.434 Got JSON-RPC error response 00:25:50.434 response: 00:25:50.434 { 00:25:50.434 "code": -32602, 00:25:50.434 "message": "Invalid cntlid range [0-65519]" 00:25:50.434 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:25:50.434 08:39:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12065 -i 65520 00:25:50.693 [2024-07-23 08:39:03.116066] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12065: invalid cntlid range [65520-65519] 00:25:50.693 08:39:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 
00:25:50.693 { 00:25:50.693 "nqn": "nqn.2016-06.io.spdk:cnode12065", 00:25:50.693 "min_cntlid": 65520, 00:25:50.693 "method": "nvmf_create_subsystem", 00:25:50.693 "req_id": 1 00:25:50.693 } 00:25:50.693 Got JSON-RPC error response 00:25:50.693 response: 00:25:50.693 { 00:25:50.693 "code": -32602, 00:25:50.693 "message": "Invalid cntlid range [65520-65519]" 00:25:50.693 }' 00:25:50.693 08:39:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:25:50.693 { 00:25:50.693 "nqn": "nqn.2016-06.io.spdk:cnode12065", 00:25:50.693 "min_cntlid": 65520, 00:25:50.693 "method": "nvmf_create_subsystem", 00:25:50.693 "req_id": 1 00:25:50.693 } 00:25:50.693 Got JSON-RPC error response 00:25:50.693 response: 00:25:50.693 { 00:25:50.693 "code": -32602, 00:25:50.693 "message": "Invalid cntlid range [65520-65519]" 00:25:50.693 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:25:50.693 08:39:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15085 -I 0 00:25:50.955 [2024-07-23 08:39:03.425213] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15085: invalid cntlid range [1-0] 00:25:50.955 08:39:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:25:50.955 { 00:25:50.955 "nqn": "nqn.2016-06.io.spdk:cnode15085", 00:25:50.955 "max_cntlid": 0, 00:25:50.955 "method": "nvmf_create_subsystem", 00:25:50.955 "req_id": 1 00:25:50.955 } 00:25:50.955 Got JSON-RPC error response 00:25:50.955 response: 00:25:50.955 { 00:25:50.955 "code": -32602, 00:25:50.955 "message": "Invalid cntlid range [1-0]" 00:25:50.955 }' 00:25:50.955 08:39:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:25:50.955 { 00:25:50.955 "nqn": "nqn.2016-06.io.spdk:cnode15085", 00:25:50.955 "max_cntlid": 0, 00:25:50.955 "method": "nvmf_create_subsystem", 00:25:50.955 "req_id": 1 00:25:50.955 } 00:25:50.955 Got JSON-RPC error response 00:25:50.955 response: 00:25:50.955 { 00:25:50.955 "code": -32602, 00:25:50.955 "message": "Invalid cntlid range [1-0]" 00:25:50.955 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:25:50.955 08:39:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3816 -I 65520 00:25:51.894 [2024-07-23 08:39:04.055641] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3816: invalid cntlid range [1-65520] 00:25:51.895 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:25:51.895 { 00:25:51.895 "nqn": "nqn.2016-06.io.spdk:cnode3816", 00:25:51.895 "max_cntlid": 65520, 00:25:51.895 "method": "nvmf_create_subsystem", 00:25:51.895 "req_id": 1 00:25:51.895 } 00:25:51.895 Got JSON-RPC error response 00:25:51.895 response: 00:25:51.895 { 00:25:51.895 "code": -32602, 00:25:51.895 "message": "Invalid cntlid range [1-65520]" 00:25:51.895 }' 00:25:51.895 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:25:51.895 { 00:25:51.895 "nqn": "nqn.2016-06.io.spdk:cnode3816", 00:25:51.895 "max_cntlid": 65520, 00:25:51.895 "method": "nvmf_create_subsystem", 00:25:51.895 "req_id": 1 00:25:51.895 } 00:25:51.895 Got JSON-RPC error response 00:25:51.895 response: 00:25:51.895 { 00:25:51.895 "code": -32602, 00:25:51.895 "message": 
"Invalid cntlid range [1-65520]" 00:25:51.895 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:25:51.895 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18895 -i 6 -I 5 00:25:52.154 [2024-07-23 08:39:04.637875] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18895: invalid cntlid range [6-5] 00:25:52.154 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:25:52.154 { 00:25:52.154 "nqn": "nqn.2016-06.io.spdk:cnode18895", 00:25:52.154 "min_cntlid": 6, 00:25:52.154 "max_cntlid": 5, 00:25:52.154 "method": "nvmf_create_subsystem", 00:25:52.154 "req_id": 1 00:25:52.154 } 00:25:52.154 Got JSON-RPC error response 00:25:52.154 response: 00:25:52.154 { 00:25:52.154 "code": -32602, 00:25:52.154 "message": "Invalid cntlid range [6-5]" 00:25:52.154 }' 00:25:52.154 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:25:52.154 { 00:25:52.154 "nqn": "nqn.2016-06.io.spdk:cnode18895", 00:25:52.154 "min_cntlid": 6, 00:25:52.154 "max_cntlid": 5, 00:25:52.154 "method": "nvmf_create_subsystem", 00:25:52.154 "req_id": 1 00:25:52.154 } 00:25:52.154 Got JSON-RPC error response 00:25:52.154 response: 00:25:52.154 { 00:25:52.154 "code": -32602, 00:25:52.154 "message": "Invalid cntlid range [6-5]" 00:25:52.154 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:25:52.154 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:25:52.414 { 00:25:52.414 "name": "foobar", 00:25:52.414 "method": "nvmf_delete_target", 00:25:52.414 "req_id": 1 00:25:52.414 } 00:25:52.414 Got JSON-RPC error response 00:25:52.414 response: 00:25:52.414 { 00:25:52.414 "code": -32602, 00:25:52.414 "message": "The specified target doesn'\''t exist, cannot delete it." 00:25:52.414 }' 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:25:52.414 { 00:25:52.414 "name": "foobar", 00:25:52.414 "method": "nvmf_delete_target", 00:25:52.414 "req_id": 1 00:25:52.414 } 00:25:52.414 Got JSON-RPC error response 00:25:52.414 response: 00:25:52.414 { 00:25:52.414 "code": -32602, 00:25:52.414 "message": "The specified target doesn't exist, cannot delete it." 
00:25:52.414 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:52.414 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:52.414 rmmod nvme_tcp 00:25:52.673 rmmod nvme_fabrics 00:25:52.673 rmmod nvme_keyring 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2340843 ']' 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2340843 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2340843 ']' 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2340843 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:52.673 08:39:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2340843 00:25:52.673 08:39:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:52.673 08:39:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:52.673 08:39:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2340843' 00:25:52.673 killing process with pid 2340843 00:25:52.673 08:39:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2340843 00:25:52.674 08:39:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2340843 00:25:54.582 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.582 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.582 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.582 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.582 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.582 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.582 
08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.582 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.121 00:25:57.121 real 0m17.288s 00:25:57.121 user 0m49.444s 00:25:57.121 sys 0m4.585s 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:25:57.121 ************************************ 00:25:57.121 END TEST nvmf_invalid 00:25:57.121 ************************************ 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:57.121 ************************************ 00:25:57.121 START TEST nvmf_connect_stress 00:25:57.121 ************************************ 00:25:57.121 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:25:57.121 * Looking for test storage... 
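[Editor's note] Every invalid-parameter case traced in the nvmf_invalid run above follows the same pattern: the JSON-RPC error output is captured into out and then matched against the expected error substring. A minimal recap, reusing the cnode18895 case from this log (error capture simplified to a single command substitution):

    # expect a rejection when min_cntlid (6) is larger than max_cntlid (5)
    out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18895 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range [6-5]"* ]] && echo "got the expected rejection"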
00:25:57.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.122 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:26:00.419 08:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.419 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:00.420 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:00.420 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:00.420 Found net devices under 0000:84:00.0: cvl_0_0 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:00.420 Found net devices under 0000:84:00.1: cvl_0_1 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:00.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:26:00.420 00:26:00.420 --- 10.0.0.2 ping statistics --- 00:26:00.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.420 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:26:00.420 00:26:00.420 --- 10.0.0.1 ping statistics --- 00:26:00.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.420 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2344899 00:26:00.420 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:00.421 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2344899 00:26:00.421 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2344899 ']' 00:26:00.421 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.421 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.421 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.421 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.421 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:00.421 [2024-07-23 08:39:12.588299] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:00.421 [2024-07-23 08:39:12.588498] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.421 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.421 [2024-07-23 08:39:12.782187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:00.680 [2024-07-23 08:39:13.102749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.680 [2024-07-23 08:39:13.102833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.680 [2024-07-23 08:39:13.102875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.680 [2024-07-23 08:39:13.102901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.680 [2024-07-23 08:39:13.102928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.680 [2024-07-23 08:39:13.103092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.680 [2024-07-23 08:39:13.103149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.680 [2024-07-23 08:39:13.103161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.624 08:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.624 08:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:26:01.624 08:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.624 08:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.624 08:39:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:01.624 [2024-07-23 08:39:14.028857] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:01.624 [2024-07-23 08:39:14.063774] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:01.624 NULL1 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2345175 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:26:01.624 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.625 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # 
set +x 00:26:01.885 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.145 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.145 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:02.145 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:02.145 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.145 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:02.405 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.405 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:02.405 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:02.405 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.405 08:39:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:02.664 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.664 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:02.664 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:02.664 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.664 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:02.923 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.923 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:02.923 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:02.923 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.923 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:03.494 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.494 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:03.494 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:03.494 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.494 08:39:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:03.764 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.764 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:03.764 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:03.764 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.764 08:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:04.034 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.034 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:04.034 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:04.034 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.034 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:04.302 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.302 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:04.302 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:04.302 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.302 08:39:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:04.562 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.562 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:04.562 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:04.562 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.562 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:05.132 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.132 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:05.132 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:05.132 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.132 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:05.391 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.391 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:05.391 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:05.391 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.391 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:05.650 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.650 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:05.650 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:05.650 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.650 08:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.910 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:05.910 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:05.910 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.910 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:06.480 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.480 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:06.480 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:06.480 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.480 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:06.740 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.740 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:06.741 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:06.741 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.741 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:07.001 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.001 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:07.001 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:07.001 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.001 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:07.261 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.261 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:07.261 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:07.261 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.261 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:07.522 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.522 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:07.522 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:07.522 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.522 08:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.091 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.091 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:08.091 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:08.091 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.091 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.351 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.351 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:08.351 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:08.351 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.351 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.614 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.614 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:08.614 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:08.614 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.614 08:39:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.876 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.876 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:08.876 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:08.876 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.876 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:09.135 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.135 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:09.135 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:09.135 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.135 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:09.703 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.703 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:09.703 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:09.703 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.703 08:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:09.962 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.963 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:09.963 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:09.963 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.963 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:10.222 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.222 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:10.222 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:10.223 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.223 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:10.482 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.482 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:10.482 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:10.482 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.482 08:39:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:11.051 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.051 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:11.051 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:11.051 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.051 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:11.310 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.310 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:11.310 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:11.310 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.310 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:11.570 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.570 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:11.570 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:11.570 08:39:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.570 08:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:11.829 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.829 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:11.829 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:26:11.829 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.829 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:11.829 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2345175 00:26:12.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2345175) - No such process 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2345175 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:12.090 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:12.090 rmmod nvme_tcp 00:26:12.349 rmmod nvme_fabrics 00:26:12.349 rmmod nvme_keyring 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2344899 ']' 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2344899 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2344899 ']' 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2344899 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.349 08:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344899 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344899' 00:26:12.349 killing process with pid 2344899 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2344899 00:26:12.349 08:39:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2344899 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.257 08:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:16.166 00:26:16.166 real 0m19.316s 00:26:16.166 user 0m45.626s 00:26:16.166 sys 0m7.083s 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:26:16.166 ************************************ 00:26:16.166 END TEST nvmf_connect_stress 00:26:16.166 ************************************ 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:16.166 ************************************ 00:26:16.166 START TEST nvmf_fused_ordering 00:26:16.166 ************************************ 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:26:16.166 * Looking for test storage... 
00:26:16.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:26:16.166 08:39:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.462 08:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:19.462 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.462 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:19.463 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
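[editor's note] The trace above builds arrays of supported Intel/Mellanox device IDs, matches each discovered PCI function against them, and then resolves the kernel net device through /sys/bus/pci/devices/<bdf>/net/. A self-contained bash sketch of that discovery idiom could look like the following; it is restricted to the two e810 IDs (0x1592, 0x159b) actually visible in this log and is not the exact gather_supported_nvmf_pci_devs implementation.

    # Sketch only: enumerate Intel e810 functions and the net devices sysfs
    # exposes under them, mirroring the pci_devs/pci_net_devs pattern above.
    intel=0x8086
    supported_ids=(0x1592 0x159b)     # only the e810 IDs seen in this log
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        case " ${supported_ids[*]} " in
            *" $device "*) ;;         # supported NIC, keep it
            *) continue ;;
        esac
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done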
00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:19.463 Found net devices under 0000:84:00.0: cvl_0_0 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:19.463 Found net devices under 0000:84:00.1: cvl_0_1 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:26:19.463 00:26:19.463 --- 10.0.0.2 ping statistics --- 00:26:19.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.463 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:19.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:19.463 00:26:19.463 --- 10.0.0.1 ping statistics --- 00:26:19.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.463 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2348455 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2348455 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2348455 ']' 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.463 08:39:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:19.463 [2024-07-23 08:39:31.848593] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:19.463 [2024-07-23 08:39:31.848908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.724 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.724 [2024-07-23 08:39:32.137957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.984 [2024-07-23 08:39:32.453502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.984 [2024-07-23 08:39:32.453590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.984 [2024-07-23 08:39:32.453626] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.984 [2024-07-23 08:39:32.453657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.984 [2024-07-23 08:39:32.453682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.984 [2024-07-23 08:39:32.453749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.979 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.979 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:26:20.979 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.979 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:20.979 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:20.979 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.979 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:20.980 [2024-07-23 08:39:33.340827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:26:20.980 [2024-07-23 08:39:33.357109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:20.980 NULL1 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.980 08:39:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:20.980 [2024-07-23 08:39:33.440756] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:20.980 [2024-07-23 08:39:33.440887] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348720 ] 00:26:21.245 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.815 Attached to nqn.2016-06.io.spdk:cnode1 00:26:21.815 Namespace ID: 1 size: 1GB 00:26:21.815 fused_ordering(0) 00:26:21.815 fused_ordering(1) 00:26:21.815 fused_ordering(2) 00:26:21.815 fused_ordering(3) 00:26:21.815 fused_ordering(4) 00:26:21.815 fused_ordering(5) 00:26:21.815 fused_ordering(6) 00:26:21.815 fused_ordering(7) 00:26:21.815 fused_ordering(8) 00:26:21.815 fused_ordering(9) 00:26:21.815 fused_ordering(10) 00:26:21.815 fused_ordering(11) 00:26:21.815 fused_ordering(12) 00:26:21.815 fused_ordering(13) 00:26:21.815 fused_ordering(14) 00:26:21.815 fused_ordering(15) 00:26:21.815 fused_ordering(16) 00:26:21.815 fused_ordering(17) 00:26:21.815 fused_ordering(18) 00:26:21.815 fused_ordering(19) 00:26:21.815 fused_ordering(20) 00:26:21.815 fused_ordering(21) 00:26:21.815 fused_ordering(22) 00:26:21.815 fused_ordering(23) 00:26:21.815 fused_ordering(24) 00:26:21.815 fused_ordering(25) 00:26:21.815 fused_ordering(26) 00:26:21.815 fused_ordering(27) 00:26:21.815 fused_ordering(28) 00:26:21.815 fused_ordering(29) 00:26:21.815 fused_ordering(30) 00:26:21.815 fused_ordering(31) 00:26:21.815 fused_ordering(32) 00:26:21.815 fused_ordering(33) 00:26:21.815 fused_ordering(34) 00:26:21.815 fused_ordering(35) 00:26:21.815 fused_ordering(36) 00:26:21.815 fused_ordering(37) 00:26:21.815 fused_ordering(38) 00:26:21.815 fused_ordering(39) 00:26:21.815 fused_ordering(40) 00:26:21.815 fused_ordering(41) 00:26:21.815 fused_ordering(42) 00:26:21.815 fused_ordering(43) 00:26:21.815 fused_ordering(44) 00:26:21.815 fused_ordering(45) 00:26:21.815 fused_ordering(46) 00:26:21.815 fused_ordering(47) 00:26:21.815 fused_ordering(48) 00:26:21.815 fused_ordering(49) 00:26:21.815 fused_ordering(50) 00:26:21.815 fused_ordering(51) 00:26:21.815 fused_ordering(52) 00:26:21.815 fused_ordering(53) 00:26:21.815 fused_ordering(54) 00:26:21.815 fused_ordering(55) 00:26:21.815 fused_ordering(56) 00:26:21.815 fused_ordering(57) 00:26:21.815 fused_ordering(58) 00:26:21.815 fused_ordering(59) 00:26:21.815 fused_ordering(60) 00:26:21.815 fused_ordering(61) 00:26:21.815 fused_ordering(62) 00:26:21.815 fused_ordering(63) 00:26:21.815 fused_ordering(64) 00:26:21.815 fused_ordering(65) 00:26:21.815 fused_ordering(66) 00:26:21.815 fused_ordering(67) 00:26:21.815 fused_ordering(68) 00:26:21.815 fused_ordering(69) 00:26:21.815 fused_ordering(70) 00:26:21.815 fused_ordering(71) 00:26:21.815 fused_ordering(72) 00:26:21.815 fused_ordering(73) 00:26:21.815 fused_ordering(74) 00:26:21.815 fused_ordering(75) 00:26:21.815 fused_ordering(76) 00:26:21.815 fused_ordering(77) 00:26:21.815 fused_ordering(78) 00:26:21.815 fused_ordering(79) 00:26:21.815 fused_ordering(80) 00:26:21.815 fused_ordering(81) 00:26:21.815 fused_ordering(82) 00:26:21.815 fused_ordering(83) 00:26:21.815 fused_ordering(84) 00:26:21.815 fused_ordering(85) 00:26:21.815 fused_ordering(86) 00:26:21.815 fused_ordering(87) 00:26:21.815 fused_ordering(88) 00:26:21.815 fused_ordering(89) 00:26:21.815 fused_ordering(90) 00:26:21.815 fused_ordering(91) 00:26:21.815 fused_ordering(92) 00:26:21.815 fused_ordering(93) 00:26:21.815 fused_ordering(94) 00:26:21.815 fused_ordering(95) 00:26:21.815 fused_ordering(96) 
00:26:21.815 fused_ordering(97) 00:26:21.815 fused_ordering(98) 00:26:21.815 fused_ordering(99) 00:26:21.815 fused_ordering(100) 00:26:21.815 fused_ordering(101) 00:26:21.815 fused_ordering(102) 00:26:21.815 fused_ordering(103) 00:26:21.815 fused_ordering(104) 00:26:21.815 fused_ordering(105) 00:26:21.815 fused_ordering(106) 00:26:21.815 fused_ordering(107) 00:26:21.815 fused_ordering(108) 00:26:21.815 fused_ordering(109) 00:26:21.815 fused_ordering(110) 00:26:21.815 fused_ordering(111) 00:26:21.815 fused_ordering(112) 00:26:21.815 fused_ordering(113) 00:26:21.815 fused_ordering(114) 00:26:21.815 fused_ordering(115) 00:26:21.815 fused_ordering(116) 00:26:21.815 fused_ordering(117) 00:26:21.815 fused_ordering(118) 00:26:21.815 fused_ordering(119) 00:26:21.815 fused_ordering(120) 00:26:21.815 fused_ordering(121) 00:26:21.815 fused_ordering(122) 00:26:21.815 fused_ordering(123) 00:26:21.815 fused_ordering(124) 00:26:21.815 fused_ordering(125) 00:26:21.815 fused_ordering(126) 00:26:21.815 fused_ordering(127) 00:26:21.815 fused_ordering(128) 00:26:21.815 fused_ordering(129) 00:26:21.815 fused_ordering(130) 00:26:21.815 fused_ordering(131) 00:26:21.815 fused_ordering(132) 00:26:21.815 fused_ordering(133) 00:26:21.815 fused_ordering(134) 00:26:21.815 fused_ordering(135) 00:26:21.815 fused_ordering(136) 00:26:21.815 fused_ordering(137) 00:26:21.815 fused_ordering(138) 00:26:21.815 fused_ordering(139) 00:26:21.815 fused_ordering(140) 00:26:21.815 fused_ordering(141) 00:26:21.815 fused_ordering(142) 00:26:21.815 fused_ordering(143) 00:26:21.815 fused_ordering(144) 00:26:21.815 fused_ordering(145) 00:26:21.815 fused_ordering(146) 00:26:21.815 fused_ordering(147) 00:26:21.815 fused_ordering(148) 00:26:21.815 fused_ordering(149) 00:26:21.815 fused_ordering(150) 00:26:21.815 fused_ordering(151) 00:26:21.815 fused_ordering(152) 00:26:21.815 fused_ordering(153) 00:26:21.815 fused_ordering(154) 00:26:21.815 fused_ordering(155) 00:26:21.815 fused_ordering(156) 00:26:21.815 fused_ordering(157) 00:26:21.815 fused_ordering(158) 00:26:21.815 fused_ordering(159) 00:26:21.815 fused_ordering(160) 00:26:21.815 fused_ordering(161) 00:26:21.815 fused_ordering(162) 00:26:21.815 fused_ordering(163) 00:26:21.815 fused_ordering(164) 00:26:21.815 fused_ordering(165) 00:26:21.815 fused_ordering(166) 00:26:21.815 fused_ordering(167) 00:26:21.815 fused_ordering(168) 00:26:21.815 fused_ordering(169) 00:26:21.815 fused_ordering(170) 00:26:21.815 fused_ordering(171) 00:26:21.815 fused_ordering(172) 00:26:21.815 fused_ordering(173) 00:26:21.815 fused_ordering(174) 00:26:21.815 fused_ordering(175) 00:26:21.815 fused_ordering(176) 00:26:21.815 fused_ordering(177) 00:26:21.815 fused_ordering(178) 00:26:21.815 fused_ordering(179) 00:26:21.815 fused_ordering(180) 00:26:21.815 fused_ordering(181) 00:26:21.815 fused_ordering(182) 00:26:21.815 fused_ordering(183) 00:26:21.815 fused_ordering(184) 00:26:21.815 fused_ordering(185) 00:26:21.815 fused_ordering(186) 00:26:21.815 fused_ordering(187) 00:26:21.815 fused_ordering(188) 00:26:21.815 fused_ordering(189) 00:26:21.815 fused_ordering(190) 00:26:21.815 fused_ordering(191) 00:26:21.815 fused_ordering(192) 00:26:21.815 fused_ordering(193) 00:26:21.815 fused_ordering(194) 00:26:21.815 fused_ordering(195) 00:26:21.815 fused_ordering(196) 00:26:21.815 fused_ordering(197) 00:26:21.815 fused_ordering(198) 00:26:21.815 fused_ordering(199) 00:26:21.815 fused_ordering(200) 00:26:21.815 fused_ordering(201) 00:26:21.815 fused_ordering(202) 00:26:21.815 fused_ordering(203) 00:26:21.815 
fused_ordering(204) 00:26:21.815 fused_ordering(205) 00:26:22.384 fused_ordering(206) 00:26:22.384 fused_ordering(207) 00:26:22.384 fused_ordering(208) 00:26:22.384 fused_ordering(209) 00:26:22.384 fused_ordering(210) 00:26:22.384 fused_ordering(211) 00:26:22.384 fused_ordering(212) 00:26:22.384 fused_ordering(213) 00:26:22.384 fused_ordering(214) 00:26:22.384 fused_ordering(215) 00:26:22.384 fused_ordering(216) 00:26:22.384 fused_ordering(217) 00:26:22.384 fused_ordering(218) 00:26:22.384 fused_ordering(219) 00:26:22.384 fused_ordering(220) 00:26:22.384 fused_ordering(221) 00:26:22.384 fused_ordering(222) 00:26:22.384 fused_ordering(223) 00:26:22.384 fused_ordering(224) 00:26:22.384 fused_ordering(225) 00:26:22.384 fused_ordering(226) 00:26:22.384 fused_ordering(227) 00:26:22.384 fused_ordering(228) 00:26:22.384 fused_ordering(229) 00:26:22.384 fused_ordering(230) 00:26:22.384 fused_ordering(231) 00:26:22.384 fused_ordering(232) 00:26:22.384 fused_ordering(233) 00:26:22.384 fused_ordering(234) 00:26:22.385 fused_ordering(235) 00:26:22.385 fused_ordering(236) 00:26:22.385 fused_ordering(237) 00:26:22.385 fused_ordering(238) 00:26:22.385 fused_ordering(239) 00:26:22.385 fused_ordering(240) 00:26:22.385 fused_ordering(241) 00:26:22.385 fused_ordering(242) 00:26:22.385 fused_ordering(243) 00:26:22.385 fused_ordering(244) 00:26:22.385 fused_ordering(245) 00:26:22.385 fused_ordering(246) 00:26:22.385 fused_ordering(247) 00:26:22.385 fused_ordering(248) 00:26:22.385 fused_ordering(249) 00:26:22.385 fused_ordering(250) 00:26:22.385 fused_ordering(251) 00:26:22.385 fused_ordering(252) 00:26:22.385 fused_ordering(253) 00:26:22.385 fused_ordering(254) 00:26:22.385 fused_ordering(255) 00:26:22.385 fused_ordering(256) 00:26:22.385 fused_ordering(257) 00:26:22.385 fused_ordering(258) 00:26:22.385 fused_ordering(259) 00:26:22.385 fused_ordering(260) 00:26:22.385 fused_ordering(261) 00:26:22.385 fused_ordering(262) 00:26:22.385 fused_ordering(263) 00:26:22.385 fused_ordering(264) 00:26:22.385 fused_ordering(265) 00:26:22.385 fused_ordering(266) 00:26:22.385 fused_ordering(267) 00:26:22.385 fused_ordering(268) 00:26:22.385 fused_ordering(269) 00:26:22.385 fused_ordering(270) 00:26:22.385 fused_ordering(271) 00:26:22.385 fused_ordering(272) 00:26:22.385 fused_ordering(273) 00:26:22.385 fused_ordering(274) 00:26:22.385 fused_ordering(275) 00:26:22.385 fused_ordering(276) 00:26:22.385 fused_ordering(277) 00:26:22.385 fused_ordering(278) 00:26:22.385 fused_ordering(279) 00:26:22.385 fused_ordering(280) 00:26:22.385 fused_ordering(281) 00:26:22.385 fused_ordering(282) 00:26:22.385 fused_ordering(283) 00:26:22.385 fused_ordering(284) 00:26:22.385 fused_ordering(285) 00:26:22.385 fused_ordering(286) 00:26:22.385 fused_ordering(287) 00:26:22.385 fused_ordering(288) 00:26:22.385 fused_ordering(289) 00:26:22.385 fused_ordering(290) 00:26:22.385 fused_ordering(291) 00:26:22.385 fused_ordering(292) 00:26:22.385 fused_ordering(293) 00:26:22.385 fused_ordering(294) 00:26:22.385 fused_ordering(295) 00:26:22.385 fused_ordering(296) 00:26:22.385 fused_ordering(297) 00:26:22.385 fused_ordering(298) 00:26:22.385 fused_ordering(299) 00:26:22.385 fused_ordering(300) 00:26:22.385 fused_ordering(301) 00:26:22.385 fused_ordering(302) 00:26:22.385 fused_ordering(303) 00:26:22.385 fused_ordering(304) 00:26:22.385 fused_ordering(305) 00:26:22.385 fused_ordering(306) 00:26:22.385 fused_ordering(307) 00:26:22.385 fused_ordering(308) 00:26:22.385 fused_ordering(309) 00:26:22.385 fused_ordering(310) 00:26:22.385 fused_ordering(311) 
00:26:22.385 fused_ordering(312) 00:26:22.385 fused_ordering(313) 00:26:22.385 fused_ordering(314) 00:26:22.385 fused_ordering(315) 00:26:22.385 fused_ordering(316) 00:26:22.385 fused_ordering(317) 00:26:22.385 fused_ordering(318) 00:26:22.385 fused_ordering(319) 00:26:22.385 fused_ordering(320) 00:26:22.385 fused_ordering(321) 00:26:22.385 fused_ordering(322) 00:26:22.385 fused_ordering(323) 00:26:22.385 fused_ordering(324) 00:26:22.385 fused_ordering(325) 00:26:22.385 fused_ordering(326) 00:26:22.385 fused_ordering(327) 00:26:22.385 fused_ordering(328) 00:26:22.385 fused_ordering(329) 00:26:22.385 fused_ordering(330) 00:26:22.385 fused_ordering(331) 00:26:22.385 fused_ordering(332) 00:26:22.385 fused_ordering(333) 00:26:22.385 fused_ordering(334) 00:26:22.385 fused_ordering(335) 00:26:22.385 fused_ordering(336) 00:26:22.385 fused_ordering(337) 00:26:22.385 fused_ordering(338) 00:26:22.385 fused_ordering(339) 00:26:22.385 fused_ordering(340) 00:26:22.385 fused_ordering(341) 00:26:22.385 fused_ordering(342) 00:26:22.385 fused_ordering(343) 00:26:22.385 fused_ordering(344) 00:26:22.385 fused_ordering(345) 00:26:22.385 fused_ordering(346) 00:26:22.385 fused_ordering(347) 00:26:22.385 fused_ordering(348) 00:26:22.385 fused_ordering(349) 00:26:22.385 fused_ordering(350) 00:26:22.385 fused_ordering(351) 00:26:22.385 fused_ordering(352) 00:26:22.385 fused_ordering(353) 00:26:22.385 fused_ordering(354) 00:26:22.385 fused_ordering(355) 00:26:22.385 fused_ordering(356) 00:26:22.385 fused_ordering(357) 00:26:22.385 fused_ordering(358) 00:26:22.385 fused_ordering(359) 00:26:22.385 fused_ordering(360) 00:26:22.385 fused_ordering(361) 00:26:22.385 fused_ordering(362) 00:26:22.385 fused_ordering(363) 00:26:22.385 fused_ordering(364) 00:26:22.385 fused_ordering(365) 00:26:22.385 fused_ordering(366) 00:26:22.385 fused_ordering(367) 00:26:22.385 fused_ordering(368) 00:26:22.385 fused_ordering(369) 00:26:22.385 fused_ordering(370) 00:26:22.385 fused_ordering(371) 00:26:22.385 fused_ordering(372) 00:26:22.385 fused_ordering(373) 00:26:22.385 fused_ordering(374) 00:26:22.385 fused_ordering(375) 00:26:22.385 fused_ordering(376) 00:26:22.385 fused_ordering(377) 00:26:22.385 fused_ordering(378) 00:26:22.385 fused_ordering(379) 00:26:22.385 fused_ordering(380) 00:26:22.385 fused_ordering(381) 00:26:22.385 fused_ordering(382) 00:26:22.385 fused_ordering(383) 00:26:22.385 fused_ordering(384) 00:26:22.385 fused_ordering(385) 00:26:22.385 fused_ordering(386) 00:26:22.385 fused_ordering(387) 00:26:22.385 fused_ordering(388) 00:26:22.385 fused_ordering(389) 00:26:22.385 fused_ordering(390) 00:26:22.385 fused_ordering(391) 00:26:22.385 fused_ordering(392) 00:26:22.385 fused_ordering(393) 00:26:22.385 fused_ordering(394) 00:26:22.385 fused_ordering(395) 00:26:22.385 fused_ordering(396) 00:26:22.385 fused_ordering(397) 00:26:22.385 fused_ordering(398) 00:26:22.385 fused_ordering(399) 00:26:22.385 fused_ordering(400) 00:26:22.385 fused_ordering(401) 00:26:22.385 fused_ordering(402) 00:26:22.385 fused_ordering(403) 00:26:22.385 fused_ordering(404) 00:26:22.385 fused_ordering(405) 00:26:22.385 fused_ordering(406) 00:26:22.385 fused_ordering(407) 00:26:22.385 fused_ordering(408) 00:26:22.385 fused_ordering(409) 00:26:22.385 fused_ordering(410) 00:26:23.324 fused_ordering(411) 00:26:23.324 fused_ordering(412) 00:26:23.324 fused_ordering(413) 00:26:23.324 fused_ordering(414) 00:26:23.324 fused_ordering(415) 00:26:23.324 fused_ordering(416) 00:26:23.324 fused_ordering(417) 00:26:23.324 fused_ordering(418) 00:26:23.324 
fused_ordering(419) 00:26:23.324 fused_ordering(420) 00:26:23.324 fused_ordering(421) 00:26:23.324 fused_ordering(422) 00:26:23.324 fused_ordering(423) 00:26:23.324 fused_ordering(424) 00:26:23.324 fused_ordering(425) 00:26:23.324 fused_ordering(426) 00:26:23.324 fused_ordering(427) 00:26:23.324 fused_ordering(428) 00:26:23.324 fused_ordering(429) 00:26:23.324 fused_ordering(430) 00:26:23.324 fused_ordering(431) 00:26:23.324 fused_ordering(432) 00:26:23.324 fused_ordering(433) 00:26:23.324 fused_ordering(434) 00:26:23.324 fused_ordering(435) 00:26:23.324 fused_ordering(436) 00:26:23.324 fused_ordering(437) 00:26:23.324 fused_ordering(438) 00:26:23.324 fused_ordering(439) 00:26:23.324 fused_ordering(440) 00:26:23.324 fused_ordering(441) 00:26:23.324 fused_ordering(442) 00:26:23.324 fused_ordering(443) 00:26:23.324 fused_ordering(444) 00:26:23.324 fused_ordering(445) 00:26:23.324 fused_ordering(446) 00:26:23.324 fused_ordering(447) 00:26:23.324 fused_ordering(448) 00:26:23.324 fused_ordering(449) 00:26:23.324 fused_ordering(450) 00:26:23.324 fused_ordering(451) 00:26:23.324 fused_ordering(452) 00:26:23.324 fused_ordering(453) 00:26:23.324 fused_ordering(454) 00:26:23.324 fused_ordering(455) 00:26:23.324 fused_ordering(456) 00:26:23.324 fused_ordering(457) 00:26:23.324 fused_ordering(458) 00:26:23.324 fused_ordering(459) 00:26:23.324 fused_ordering(460) 00:26:23.324 fused_ordering(461) 00:26:23.324 fused_ordering(462) 00:26:23.324 fused_ordering(463) 00:26:23.324 fused_ordering(464) 00:26:23.324 fused_ordering(465) 00:26:23.324 fused_ordering(466) 00:26:23.324 fused_ordering(467) 00:26:23.324 fused_ordering(468) 00:26:23.324 fused_ordering(469) 00:26:23.324 fused_ordering(470) 00:26:23.324 fused_ordering(471) 00:26:23.324 fused_ordering(472) 00:26:23.324 fused_ordering(473) 00:26:23.324 fused_ordering(474) 00:26:23.325 fused_ordering(475) 00:26:23.325 fused_ordering(476) 00:26:23.325 fused_ordering(477) 00:26:23.325 fused_ordering(478) 00:26:23.325 fused_ordering(479) 00:26:23.325 fused_ordering(480) 00:26:23.325 fused_ordering(481) 00:26:23.325 fused_ordering(482) 00:26:23.325 fused_ordering(483) 00:26:23.325 fused_ordering(484) 00:26:23.325 fused_ordering(485) 00:26:23.325 fused_ordering(486) 00:26:23.325 fused_ordering(487) 00:26:23.325 fused_ordering(488) 00:26:23.325 fused_ordering(489) 00:26:23.325 fused_ordering(490) 00:26:23.325 fused_ordering(491) 00:26:23.325 fused_ordering(492) 00:26:23.325 fused_ordering(493) 00:26:23.325 fused_ordering(494) 00:26:23.325 fused_ordering(495) 00:26:23.325 fused_ordering(496) 00:26:23.325 fused_ordering(497) 00:26:23.325 fused_ordering(498) 00:26:23.325 fused_ordering(499) 00:26:23.325 fused_ordering(500) 00:26:23.325 fused_ordering(501) 00:26:23.325 fused_ordering(502) 00:26:23.325 fused_ordering(503) 00:26:23.325 fused_ordering(504) 00:26:23.325 fused_ordering(505) 00:26:23.325 fused_ordering(506) 00:26:23.325 fused_ordering(507) 00:26:23.325 fused_ordering(508) 00:26:23.325 fused_ordering(509) 00:26:23.325 fused_ordering(510) 00:26:23.325 fused_ordering(511) 00:26:23.325 fused_ordering(512) 00:26:23.325 fused_ordering(513) 00:26:23.325 fused_ordering(514) 00:26:23.325 fused_ordering(515) 00:26:23.325 fused_ordering(516) 00:26:23.325 fused_ordering(517) 00:26:23.325 fused_ordering(518) 00:26:23.325 fused_ordering(519) 00:26:23.325 fused_ordering(520) 00:26:23.325 fused_ordering(521) 00:26:23.325 fused_ordering(522) 00:26:23.325 fused_ordering(523) 00:26:23.325 fused_ordering(524) 00:26:23.325 fused_ordering(525) 00:26:23.325 fused_ordering(526) 
00:26:23.325 fused_ordering(527) 00:26:23.325 fused_ordering(528) 00:26:23.325 fused_ordering(529) 00:26:23.325 fused_ordering(530) 00:26:23.325 fused_ordering(531) 00:26:23.325 fused_ordering(532) 00:26:23.325 fused_ordering(533) 00:26:23.325 fused_ordering(534) 00:26:23.325 fused_ordering(535) 00:26:23.325 fused_ordering(536) 00:26:23.325 fused_ordering(537) 00:26:23.325 fused_ordering(538) 00:26:23.325 fused_ordering(539) 00:26:23.325 fused_ordering(540) 00:26:23.325 fused_ordering(541) 00:26:23.325 fused_ordering(542) 00:26:23.325 fused_ordering(543) 00:26:23.325 fused_ordering(544) 00:26:23.325 fused_ordering(545) 00:26:23.325 fused_ordering(546) 00:26:23.325 fused_ordering(547) 00:26:23.325 fused_ordering(548) 00:26:23.325 fused_ordering(549) 00:26:23.325 fused_ordering(550) 00:26:23.325 fused_ordering(551) 00:26:23.325 fused_ordering(552) 00:26:23.325 fused_ordering(553) 00:26:23.325 fused_ordering(554) 00:26:23.325 fused_ordering(555) 00:26:23.325 fused_ordering(556) 00:26:23.325 fused_ordering(557) 00:26:23.325 fused_ordering(558) 00:26:23.325 fused_ordering(559) 00:26:23.325 fused_ordering(560) 00:26:23.325 fused_ordering(561) 00:26:23.325 fused_ordering(562) 00:26:23.325 fused_ordering(563) 00:26:23.325 fused_ordering(564) 00:26:23.325 fused_ordering(565) 00:26:23.325 fused_ordering(566) 00:26:23.325 fused_ordering(567) 00:26:23.325 fused_ordering(568) 00:26:23.325 fused_ordering(569) 00:26:23.325 fused_ordering(570) 00:26:23.325 fused_ordering(571) 00:26:23.325 fused_ordering(572) 00:26:23.325 fused_ordering(573) 00:26:23.325 fused_ordering(574) 00:26:23.325 fused_ordering(575) 00:26:23.325 fused_ordering(576) 00:26:23.325 fused_ordering(577) 00:26:23.325 fused_ordering(578) 00:26:23.325 fused_ordering(579) 00:26:23.325 fused_ordering(580) 00:26:23.325 fused_ordering(581) 00:26:23.325 fused_ordering(582) 00:26:23.325 fused_ordering(583) 00:26:23.325 fused_ordering(584) 00:26:23.325 fused_ordering(585) 00:26:23.325 fused_ordering(586) 00:26:23.325 fused_ordering(587) 00:26:23.325 fused_ordering(588) 00:26:23.325 fused_ordering(589) 00:26:23.325 fused_ordering(590) 00:26:23.325 fused_ordering(591) 00:26:23.325 fused_ordering(592) 00:26:23.325 fused_ordering(593) 00:26:23.325 fused_ordering(594) 00:26:23.325 fused_ordering(595) 00:26:23.325 fused_ordering(596) 00:26:23.325 fused_ordering(597) 00:26:23.325 fused_ordering(598) 00:26:23.325 fused_ordering(599) 00:26:23.325 fused_ordering(600) 00:26:23.325 fused_ordering(601) 00:26:23.325 fused_ordering(602) 00:26:23.325 fused_ordering(603) 00:26:23.325 fused_ordering(604) 00:26:23.325 fused_ordering(605) 00:26:23.325 fused_ordering(606) 00:26:23.325 fused_ordering(607) 00:26:23.325 fused_ordering(608) 00:26:23.325 fused_ordering(609) 00:26:23.325 fused_ordering(610) 00:26:23.325 fused_ordering(611) 00:26:23.325 fused_ordering(612) 00:26:23.325 fused_ordering(613) 00:26:23.325 fused_ordering(614) 00:26:23.325 fused_ordering(615) 00:26:24.263 fused_ordering(616) 00:26:24.263 fused_ordering(617) 00:26:24.263 fused_ordering(618) 00:26:24.263 fused_ordering(619) 00:26:24.263 fused_ordering(620) 00:26:24.263 fused_ordering(621) 00:26:24.263 fused_ordering(622) 00:26:24.263 fused_ordering(623) 00:26:24.263 fused_ordering(624) 00:26:24.263 fused_ordering(625) 00:26:24.263 fused_ordering(626) 00:26:24.263 fused_ordering(627) 00:26:24.263 fused_ordering(628) 00:26:24.263 fused_ordering(629) 00:26:24.263 fused_ordering(630) 00:26:24.263 fused_ordering(631) 00:26:24.263 fused_ordering(632) 00:26:24.263 fused_ordering(633) 00:26:24.263 
00:26:24.263 fused_ordering(634) ... 00:26:24.264 fused_ordering(820) 00:26:25.643 fused_ordering(821) ... 00:26:25.643 fused_ordering(956) [repetitive per-iteration fused_ordering output between 00:26:24.263 and 00:26:25.643 condensed; the run continues below through fused_ordering(1023)]
00:26:25.643 fused_ordering(957) 00:26:25.643 fused_ordering(958) 00:26:25.643 fused_ordering(959) 00:26:25.643 fused_ordering(960) 00:26:25.643 fused_ordering(961) 00:26:25.643 fused_ordering(962) 00:26:25.643 fused_ordering(963) 00:26:25.643 fused_ordering(964) 00:26:25.643 fused_ordering(965) 00:26:25.643 fused_ordering(966) 00:26:25.643 fused_ordering(967) 00:26:25.643 fused_ordering(968) 00:26:25.643 fused_ordering(969) 00:26:25.643 fused_ordering(970) 00:26:25.643 fused_ordering(971) 00:26:25.643 fused_ordering(972) 00:26:25.643 fused_ordering(973) 00:26:25.643 fused_ordering(974) 00:26:25.643 fused_ordering(975) 00:26:25.643 fused_ordering(976) 00:26:25.643 fused_ordering(977) 00:26:25.643 fused_ordering(978) 00:26:25.643 fused_ordering(979) 00:26:25.643 fused_ordering(980) 00:26:25.643 fused_ordering(981) 00:26:25.643 fused_ordering(982) 00:26:25.643 fused_ordering(983) 00:26:25.643 fused_ordering(984) 00:26:25.643 fused_ordering(985) 00:26:25.643 fused_ordering(986) 00:26:25.643 fused_ordering(987) 00:26:25.643 fused_ordering(988) 00:26:25.643 fused_ordering(989) 00:26:25.643 fused_ordering(990) 00:26:25.643 fused_ordering(991) 00:26:25.643 fused_ordering(992) 00:26:25.643 fused_ordering(993) 00:26:25.643 fused_ordering(994) 00:26:25.643 fused_ordering(995) 00:26:25.643 fused_ordering(996) 00:26:25.643 fused_ordering(997) 00:26:25.643 fused_ordering(998) 00:26:25.643 fused_ordering(999) 00:26:25.643 fused_ordering(1000) 00:26:25.643 fused_ordering(1001) 00:26:25.643 fused_ordering(1002) 00:26:25.643 fused_ordering(1003) 00:26:25.643 fused_ordering(1004) 00:26:25.643 fused_ordering(1005) 00:26:25.643 fused_ordering(1006) 00:26:25.643 fused_ordering(1007) 00:26:25.643 fused_ordering(1008) 00:26:25.643 fused_ordering(1009) 00:26:25.643 fused_ordering(1010) 00:26:25.643 fused_ordering(1011) 00:26:25.643 fused_ordering(1012) 00:26:25.644 fused_ordering(1013) 00:26:25.644 fused_ordering(1014) 00:26:25.644 fused_ordering(1015) 00:26:25.644 fused_ordering(1016) 00:26:25.644 fused_ordering(1017) 00:26:25.644 fused_ordering(1018) 00:26:25.644 fused_ordering(1019) 00:26:25.644 fused_ordering(1020) 00:26:25.644 fused_ordering(1021) 00:26:25.644 fused_ordering(1022) 00:26:25.644 fused_ordering(1023) 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:25.644 rmmod nvme_tcp 00:26:25.644 rmmod nvme_fabrics 00:26:25.644 rmmod nvme_keyring 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2348455 ']' 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2348455 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2348455 ']' 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2348455 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2348455 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2348455' 00:26:25.644 killing process with pid 2348455 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2348455 00:26:25.644 08:39:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2348455 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.553 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:29.461 00:26:29.461 real 0m13.118s 00:26:29.461 user 0m11.267s 00:26:29.461 sys 0m5.460s 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:26:29.461 ************************************ 00:26:29.461 END TEST nvmf_fused_ordering 00:26:29.461 ************************************ 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:29.461 08:39:41 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:29.461 ************************************ 00:26:29.461 START TEST nvmf_ns_masking 00:26:29.461 ************************************ 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:26:29.461 * Looking for test storage... 00:26:29.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.461 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:29.462 08:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6df8846d-649a-4cfe-b82b-b3828260db5c 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=43467ec1-a6f9-4644-ae53-894d9dfb464e 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=165f8a0e-40d3-4b4a-8379-b50d6fbdefb1 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:26:29.462 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:32.752 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.752 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:32.753 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:32.753 Found net devices under 0000:84:00.0: cvl_0_0 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:32.753 Found net devices under 0000:84:00.1: cvl_0_1 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.753 08:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.753 08:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:32.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:26:32.753 00:26:32.753 --- 10.0.0.2 ping statistics --- 00:26:32.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.753 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:32.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:26:32.753 00:26:32.753 --- 10.0.0.1 ping statistics --- 00:26:32.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.753 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2351343 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2351343 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2351343 ']' 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.753 08:39:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:33.013 [2024-07-23 08:39:45.329570] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:33.013 [2024-07-23 08:39:45.329767] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.013 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.273 [2024-07-23 08:39:45.700664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.843 [2024-07-23 08:39:46.239698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.843 [2024-07-23 08:39:46.239836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.843 [2024-07-23 08:39:46.239899] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.843 [2024-07-23 08:39:46.239955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.843 [2024-07-23 08:39:46.240008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.843 [2024-07-23 08:39:46.240123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.411 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:34.411 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:26:34.411 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:34.411 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:34.411 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:34.411 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.411 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:35.351 [2024-07-23 08:39:47.523610] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.351 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:26:35.351 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:26:35.351 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:35.920 Malloc1 00:26:35.920 08:39:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:26:36.490 Malloc2 00:26:36.490 08:39:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:37.060 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:26:37.632 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.202 [2024-07-23 08:39:50.518069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.202 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:26:38.202 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 165f8a0e-40d3-4b4a-8379-b50d6fbdefb1 -a 10.0.0.2 -s 4420 -i 4 00:26:38.202 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:26:38.202 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:26:38.202 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:38.203 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:38.203 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:40.780 [ 0]:0x1 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32eac5f61beb41e6a68aeab6c85b13b9 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32eac5f61beb41e6a68aeab6c85b13b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:40.780 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:26:41.038 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:26:41.038 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:41.038 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:41.038 [ 0]:0x1 00:26:41.038 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:41.038 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32eac5f61beb41e6a68aeab6c85b13b9 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32eac5f61beb41e6a68aeab6c85b13b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:41.296 [ 1]:0x2 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48b3ee5a480148d4a5050859922f4626 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48b3ee5a480148d4a5050859922f4626 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:26:41.296 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:41.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:41.556 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.816 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:26:42.386 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:26:42.386 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 165f8a0e-40d3-4b4a-8379-b50d6fbdefb1 -a 10.0.0.2 -s 4420 -i 4 00:26:42.646 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:26:42.646 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:26:42.646 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:42.646 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:26:42.646 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:26:42.646 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:26:44.553 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:44.813 [ 0]:0x2 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48b3ee5a480148d4a5050859922f4626 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48b3ee5a480148d4a5050859922f4626 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:44.813 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:45.383 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:26:45.383 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:45.383 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:45.383 [ 0]:0x1 00:26:45.383 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:45.383 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:45.643 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32eac5f61beb41e6a68aeab6c85b13b9 00:26:45.643 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32eac5f61beb41e6a68aeab6c85b13b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:45.643 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:26:45.643 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:45.643 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:45.643 [ 1]:0x2 00:26:45.643 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:26:45.643 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:45.643 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48b3ee5a480148d4a5050859922f4626 00:26:45.643 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48b3ee5a480148d4a5050859922f4626 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:45.643 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:46.213 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:46.473 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:46.473 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:46.473 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:46.473 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.473 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.473 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:46.474 [ 0]:0x2 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48b3ee5a480148d4a5050859922f4626 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48b3ee5a480148d4a5050859922f4626 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:26:46.474 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:46.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:46.733 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:47.303 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:26:47.303 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 165f8a0e-40d3-4b4a-8379-b50d6fbdefb1 -a 10.0.0.2 -s 4420 -i 4 00:26:47.304 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:26:47.304 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:26:47.304 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.304 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:26:47.304 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:26:47.304 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:49.845 [ 0]:0x1 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32eac5f61beb41e6a68aeab6c85b13b9 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32eac5f61beb41e6a68aeab6c85b13b9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:49.845 [ 1]:0x2 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:49.845 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:49.845 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48b3ee5a480148d4a5050859922f4626 00:26:49.845 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48b3ee5a480148d4a5050859922f4626 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:49.845 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:50.415 08:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:50.415 [ 0]:0x2 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48b3ee5a480148d4a5050859922f4626 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48b3ee5a480148d4a5050859922f4626 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:26:50.415 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:50.984 [2024-07-23 08:40:03.435433] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:26:50.984 request: 00:26:50.984 { 00:26:50.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.984 "nsid": 2, 00:26:50.984 "host": "nqn.2016-06.io.spdk:host1", 00:26:50.984 "method": "nvmf_ns_remove_host", 00:26:50.984 "req_id": 1 00:26:50.984 } 00:26:50.984 Got JSON-RPC error response 00:26:50.984 response: 00:26:50.984 { 00:26:50.984 "code": -32602, 00:26:50.984 "message": "Invalid parameters" 00:26:50.984 } 00:26:50.984 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:50.984 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:50.984 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:50.984 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:50.984 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:26:50.984 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:50.985 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:26:51.244 [ 0]:0x2 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=48b3ee5a480148d4a5050859922f4626 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 48b3ee5a480148d4a5050859922f4626 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:51.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2353510 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2353510 /var/tmp/host.sock 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2353510 ']' 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:26:51.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:26:51.244 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.245 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:51.505 [2024-07-23 08:40:03.910141] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
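The masking checks above reduce to two target-side RPCs plus a visibility probe from the initiator, and the NOT-wrapped call shows the negative path: nvmf_ns_remove_host against namespace 2 is rejected with a -32602 "Invalid parameters" JSON-RPC error. A minimal sketch of the flow, assuming the setup from this run (subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a connected controller at /dev/nvme0, and rpc.py shown relative to the SPDK tree):

  # hide namespace 1 from host1, then confirm the initiator no longer reports it
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0 | grep 0x1                     # no match once the namespace is masked
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # all zeroes for a hidden namespace

  # re-expose it (the run above reconnects before re-checking) and probe again
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # the real NGUID is reported again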
00:26:51.505 [2024-07-23 08:40:03.910460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353510 ] 00:26:51.764 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.764 [2024-07-23 08:40:04.171648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.024 [2024-07-23 08:40:04.490871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.405 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.405 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:26:53.405 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:53.974 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:54.542 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6df8846d-649a-4cfe-b82b-b3828260db5c 00:26:54.542 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:26:54.542 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6DF8846D649A4CFEB82BB3828260DB5C -i 00:26:55.112 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 43467ec1-a6f9-4644-ae53-894d9dfb464e 00:26:55.113 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:26:55.113 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 43467EC1A6F94644AE53894D9DFB464E -i 00:26:55.681 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:56.252 08:40:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:26:56.522 08:40:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:26:56.522 08:40:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:26:57.472 nvme0n1 00:26:57.472 08:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:26:57.472 08:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:26:58.042 nvme1n2 00:26:58.042 08:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:26:58.042 08:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:26:58.042 08:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:26:58.042 08:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:26:58.042 08:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:26:58.981 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:26:58.981 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:26:58.981 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:26:58.981 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:26:59.551 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6df8846d-649a-4cfe-b82b-b3828260db5c == \6\d\f\8\8\4\6\d\-\6\4\9\a\-\4\c\f\e\-\b\8\2\b\-\b\3\8\2\8\2\6\0\d\b\5\c ]] 00:26:59.552 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:26:59.552 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:26:59.552 08:40:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 43467ec1-a6f9-4644-ae53-894d9dfb464e == \4\3\4\6\7\e\c\1\-\a\6\f\9\-\4\6\4\4\-\a\e\5\3\-\8\9\4\d\9\d\f\b\4\6\4\e ]] 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2353510 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2353510 ']' 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2353510 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353510 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:00.119 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 2353510' 00:27:00.119 killing process with pid 2353510 00:27:00.120 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2353510 00:27:00.120 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2353510 00:27:03.415 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.676 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.676 rmmod nvme_tcp 00:27:03.676 rmmod nvme_fabrics 00:27:03.676 rmmod nvme_keyring 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2351343 ']' 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2351343 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2351343 ']' 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2351343 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2351343 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2351343' 00:27:03.676 killing process with pid 2351343 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2351343 00:27:03.676 08:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2351343 00:27:06.968 08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:06.968 08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:06.968 
08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:06.968 08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.968 08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:06.968 08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.968 08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.968 08:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.877 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:08.877 00:27:08.877 real 0m39.310s 00:27:08.877 user 1m0.176s 00:27:08.877 sys 0m7.576s 00:27:08.877 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:08.877 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:08.877 ************************************ 00:27:08.877 END TEST nvmf_ns_masking 00:27:08.877 ************************************ 00:27:08.877 08:40:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:08.877 08:40:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:27:08.877 08:40:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:27:08.877 08:40:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:08.878 ************************************ 00:27:08.878 START TEST nvmf_nvme_cli 00:27:08.878 ************************************ 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:27:08.878 * Looking for test storage... 
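Before moving on to the nvme_cli test, it helps to condense what the second half of the masking test above exercised: a second SPDK app is started as the host side with its RPC socket at /var/tmp/host.sock, the two namespaces are re-created with explicit NGUIDs and each made visible to a different host NQN, and bdev_nvme attaches to the subsystem once per host identity so that each resulting controller exposes only its own namespace. A condensed sketch with the values from the run (paths abbreviated):

  ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &       # host-side SPDK app

  # target side: re-create the namespaces exactly as in the run above, then allow one host each
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6DF8846D649A4CFEB82BB3828260DB5C -i
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 43467EC1A6F94644AE53894D9DFB464E -i
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

  # host side: attach once per host NQN, then check which bdevs each controller exposes
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
  scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'            # expect nvme0n1 and nvme1n2
  scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid' # matches the first namespace's UUID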
00:27:08.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.878 08:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:27:08.878 08:40:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.170 08:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:12.170 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:12.170 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:12.170 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.171 08:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:12.171 Found net devices under 0000:84:00.0: cvl_0_0 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:12.171 Found net devices under 0000:84:00.1: cvl_0_1 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.171 08:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:27:12.171 00:27:12.171 --- 10.0.0.2 ping statistics --- 00:27:12.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.171 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:12.171 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:27:12.432 00:27:12.432 --- 10.0.0.1 ping statistics --- 00:27:12.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.432 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2357300 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2357300 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2357300 ']' 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.432 08:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:12.432 [2024-07-23 08:40:24.943036] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
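The TCP setup above isolates the target NIC port in its own network namespace so that the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exchange traffic over real hardware on a single machine. Condensed from the commands in the trace, assuming those interface names:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP on the initiator side
  ping -c 1 10.0.0.2                                                # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace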
00:27:12.432 [2024-07-23 08:40:24.943366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.692 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.953 [2024-07-23 08:40:25.270582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.523 [2024-07-23 08:40:25.776629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.523 [2024-07-23 08:40:25.776710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.523 [2024-07-23 08:40:25.776744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.523 [2024-07-23 08:40:25.776770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.523 [2024-07-23 08:40:25.776805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.523 [2024-07-23 08:40:25.776930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.523 [2024-07-23 08:40:25.776996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.523 [2024-07-23 08:40:25.777049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.523 [2024-07-23 08:40:25.777062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.093 [2024-07-23 08:40:26.582607] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.093 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.360 Malloc0 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:14.360 08:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.360 Malloc1 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.360 [2024-07-23 08:40:26.822389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.360 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:27:14.640 00:27:14.640 Discovery Log Number of Records 2, Generation counter 2 00:27:14.640 =====Discovery Log Entry 0====== 00:27:14.640 trtype: tcp 00:27:14.640 adrfam: ipv4 00:27:14.640 subtype: current discovery subsystem 00:27:14.640 treq: not required 
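Once the target process is up inside the namespace, its configuration is pure JSON-RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py aimed at the target's RPC socket. A sketch of the same sequence:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With both listeners in place, nvme discover reports exactly two records, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, as the discovery log entries around this point show.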
00:27:14.640 portid: 0 00:27:14.640 trsvcid: 4420 00:27:14.640 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:14.640 traddr: 10.0.0.2 00:27:14.640 eflags: explicit discovery connections, duplicate discovery information 00:27:14.640 sectype: none 00:27:14.640 =====Discovery Log Entry 1====== 00:27:14.640 trtype: tcp 00:27:14.640 adrfam: ipv4 00:27:14.640 subtype: nvme subsystem 00:27:14.640 treq: not required 00:27:14.640 portid: 0 00:27:14.640 trsvcid: 4420 00:27:14.640 subnqn: nqn.2016-06.io.spdk:cnode1 00:27:14.640 traddr: 10.0.0.2 00:27:14.640 eflags: none 00:27:14.640 sectype: none 00:27:14.640 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:27:14.640 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:27:14.640 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:27:14.640 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:14.640 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:27:14.640 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:27:14.641 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:14.641 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:27:14.641 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:14.641 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:27:14.641 08:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:15.227 08:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:27:15.227 08:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:27:15.227 08:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:15.227 08:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:27:15.227 08:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:27:15.227 08:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:27:17.767 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:17.767 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:17.767 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:17.767 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:27:17.767 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:27:17.768 /dev/nvme0n1 ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:17.768 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.768 08:40:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:17.768 rmmod nvme_tcp 00:27:17.768 rmmod nvme_fabrics 00:27:17.768 rmmod nvme_keyring 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2357300 ']' 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2357300 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2357300 ']' 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2357300 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2357300 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2357300' 00:27:17.768 killing process with pid 2357300 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2357300 00:27:17.768 08:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2357300 00:27:20.309 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.309 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.309 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.309 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.310 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.310 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.310 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.310 08:40:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:22.218 00:27:22.218 real 0m13.604s 00:27:22.218 user 0m25.985s 00:27:22.218 sys 0m3.950s 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:27:22.218 ************************************ 00:27:22.218 END TEST nvmf_nvme_cli 00:27:22.218 ************************************ 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.218 08:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:22.479 ************************************ 00:27:22.479 START TEST nvmf_auth_target 00:27:22.479 ************************************ 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:22.479 * Looking for test storage... 
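Before the auth target test gets going, it is worth restating what the nvme_cli run traced above actually did, stripped of the xtrace noise. The recap below only repeats commands already visible in the trace; rpc.py stands in for the workspace scripts/rpc.py that the rpc_cmd helper effectively wraps, and <hostnqn>/<hostid> stand in for the nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... values the test obtained from nvme gen-hostnqn.

  # Target side: TCP transport, two RAM-backed bdevs, one subsystem with two namespaces and listeners.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: the discovery log lists both the discovery subsystem and cnode1,
  # the connect surfaces /dev/nvme0n1 and /dev/nvme0n2, and the disconnect tears them down.
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=<hostnqn> --hostid=<hostid>
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=<hostnqn> --hostid=<hostid>
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1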
00:27:22.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.479 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.480 
08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.480 08:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:25.778 
08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:25.778 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:25.778 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.778 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:25.779 Found net devices under 0000:84:00.0: cvl_0_0 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.779 08:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:25.779 Found net devices under 0000:84:00.1: cvl_0_1 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:25.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:25.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:27:25.779 00:27:25.779 --- 10.0.0.2 ping statistics --- 00:27:25.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.779 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:27:25.779 00:27:25.779 --- 10.0.0.1 ping statistics --- 00:27:25.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.779 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:25.779 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2360217 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2360217 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2360217 ']' 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
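To make the nvmfappstart/waitforlisten exchange above easier to follow: nvmf_tcp_init put the first test interface (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace while the second (cvl_0_1, 10.0.0.1) stayed in the root namespace, so the target binary is launched inside that namespace and the script then polls until the RPC socket answers. The loop below is a simplified stand-in for the common.sh helper, assuming the default /var/tmp/spdk.sock socket shown in the trace; it is a sketch of the pattern, not the helper itself.

  # Start the target inside the namespace created earlier (command as traced, path shortened).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!

  # Simplified waitforlisten: poll the RPC socket until the app responds, give up after ~100 tries.
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done

Because the RPC endpoint is a UNIX domain socket on the shared filesystem, the poller can reach it from the root namespace without entering cvl_0_0_ns_spdk.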
00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.038 08:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2360371 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8295d6f49f16b5ad54fb140578038181ea8972e84ad7bea2 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.v0v 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8295d6f49f16b5ad54fb140578038181ea8972e84ad7bea2 0 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8295d6f49f16b5ad54fb140578038181ea8972e84ad7bea2 0 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8295d6f49f16b5ad54fb140578038181ea8972e84ad7bea2 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 
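The gen_dhchap_key calls traced here follow one pattern: read a random secret of the requested length from /dev/urandom, frame it as an NVMe DH-HMAC-CHAP secret, and store it in a mode-0600 temp file. The commands below mirror the trace for the sha256/32 case; the python3 one-liner is a reconstruction of what the inline "python -" step most plausibly emits (base64 of the secret plus an assumed little-endian CRC-32, framed as DHHC-1:<hash id>:...:), so treat the exact framing as an assumption rather than a quote of nvmf/common.sh.

  digest=1                                  # per the trace's table: null=0, sha256=1, sha384=2, sha512=3
  key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex characters = 16 random bytes
  file=$(mktemp -t spdk.key-sha256.XXX)
  # Assumed framing: DHHC-1:<hash id>:base64(secret || crc32(secret)):
  python3 -c 'import base64,struct,sys,zlib; s=bytes.fromhex(sys.argv[1]); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s + struct.pack("<I", zlib.crc32(s) & 0xffffffff)).decode()))' "$key" "$digest" > "$file"
  chmod 0600 "$file"
  echo "$file"

The keys are written to files rather than kept inline so they can later be handed to keyring_file_add_key, which takes a path.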
00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.v0v 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.v0v 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.v0v 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:27.952 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=559fee04dd3e0722c78d4bb6726bf5db789146c6f3b467102b4e1e96759ece5a 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.17d 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 559fee04dd3e0722c78d4bb6726bf5db789146c6f3b467102b4e1e96759ece5a 3 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 559fee04dd3e0722c78d4bb6726bf5db789146c6f3b467102b4e1e96759ece5a 3 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=559fee04dd3e0722c78d4bb6726bf5db789146c6f3b467102b4e1e96759ece5a 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.17d 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.17d 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.17d 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 
00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b725df913516f11a114353f6ffa62270 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8TQ 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b725df913516f11a114353f6ffa62270 1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b725df913516f11a114353f6ffa62270 1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b725df913516f11a114353f6ffa62270 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8TQ 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8TQ 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.8TQ 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=81c0f36b49af2d9be2d5c8a3ed7bcc3b231faf5c640d34af 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lyE 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 81c0f36b49af2d9be2d5c8a3ed7bcc3b231faf5c640d34af 2 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 81c0f36b49af2d9be2d5c8a3ed7bcc3b231faf5c640d34af 2 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.953 08:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=81c0f36b49af2d9be2d5c8a3ed7bcc3b231faf5c640d34af 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lyE 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lyE 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.lyE 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b91b7a134764c1a7ab3d4acd47b03858df48c576135a9fa6 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ovk 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b91b7a134764c1a7ab3d4acd47b03858df48c576135a9fa6 2 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b91b7a134764c1a7ab3d4acd47b03858df48c576135a9fa6 2 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b91b7a134764c1a7ab3d4acd47b03858df48c576135a9fa6 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ovk 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ovk 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Ovk 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 
00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=83ebe5e153346334a46a5c97e80e3ae9 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aDt 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 83ebe5e153346334a46a5c97e80e3ae9 1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 83ebe5e153346334a46a5c97e80e3ae9 1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=83ebe5e153346334a46a5c97e80e3ae9 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aDt 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aDt 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.aDt 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:27.953 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42b8bb1dbf7c03ed7af2a97e0bdbd75f89ea11470af771d82c3fc675f1c2d4f5 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1E8 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key 42b8bb1dbf7c03ed7af2a97e0bdbd75f89ea11470af771d82c3fc675f1c2d4f5 3 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42b8bb1dbf7c03ed7af2a97e0bdbd75f89ea11470af771d82c3fc675f1c2d4f5 3 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42b8bb1dbf7c03ed7af2a97e0bdbd75f89ea11470af771d82c3fc675f1c2d4f5 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1E8 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1E8 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.1E8 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2360217 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2360217 ']' 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.954 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2360371 /var/tmp/host.sock 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2360371 ']' 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:27:28.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
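Once both applications answer on their RPC sockets, auth.sh registers every generated key file twice: once with the target on the default /var/tmp/spdk.sock and once with the host-side spdk_tgt that was started with -r /var/tmp/host.sock, which is exactly what the rpc_cmd/hostrpc pairs in the following trace do. A recap for key slot 0 using this run's file names (the explicit rpc.py calls are the standalone equivalent of the rpc_cmd helper); the remaining slots follow the same pattern.

  # Target side (default RPC socket).
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.v0v
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.17d

  # Host side (the spdk_tgt listening on /var/tmp/host.sock).
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.v0v
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.17d

Registering the same names on both ends lets later steps refer to the keys by keyring name rather than by file path.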
00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.520 08:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.v0v 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.v0v 00:27:29.456 08:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.v0v 00:27:30.025 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.17d ]] 00:27:30.025 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.17d 00:27:30.026 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.026 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.026 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.026 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.17d 00:27:30.026 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.17d 00:27:30.286 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:30.286 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8TQ 00:27:30.286 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.286 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.286 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.286 08:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8TQ 00:27:30.286 08:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8TQ 00:27:30.856 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.lyE ]] 00:27:30.856 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lyE 00:27:30.856 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.856 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.856 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.856 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lyE 00:27:30.856 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lyE 00:27:31.427 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:31.427 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ovk 00:27:31.427 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.427 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.427 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.427 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Ovk 00:27:31.427 08:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Ovk 00:27:31.997 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.aDt ]] 00:27:31.997 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aDt 00:27:31.997 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.997 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.997 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.997 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aDt 00:27:31.997 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aDt 00:27:32.574 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:32.574 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1E8 00:27:32.574 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.574 08:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.575 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1E8 00:27:32.575 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1E8 00:27:32.866 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:27:32.866 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:27:32.866 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.866 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:32.866 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:32.866 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.440 08:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.007 00:27:34.007 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:34.007 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:34.007 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:34.269 { 00:27:34.269 "cntlid": 1, 00:27:34.269 "qid": 0, 00:27:34.269 "state": "enabled", 00:27:34.269 "thread": "nvmf_tgt_poll_group_000", 00:27:34.269 "listen_address": { 00:27:34.269 "trtype": "TCP", 00:27:34.269 "adrfam": "IPv4", 00:27:34.269 "traddr": "10.0.0.2", 00:27:34.269 "trsvcid": "4420" 00:27:34.269 }, 00:27:34.269 "peer_address": { 00:27:34.269 "trtype": "TCP", 00:27:34.269 "adrfam": "IPv4", 00:27:34.269 "traddr": "10.0.0.1", 00:27:34.269 "trsvcid": "37858" 00:27:34.269 }, 00:27:34.269 "auth": { 00:27:34.269 "state": "completed", 00:27:34.269 "digest": "sha256", 00:27:34.269 "dhgroup": "null" 00:27:34.269 } 00:27:34.269 } 00:27:34.269 ]' 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:34.269 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:34.532 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:34.532 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:34.532 08:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:34.792 08:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:36.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:36.702 08:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.702 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.270 00:27:37.270 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:27:37.270 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:37.270 08:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:37.838 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.838 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:37.838 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.838 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:37.838 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.838 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:37.838 { 00:27:37.838 "cntlid": 3, 00:27:37.838 "qid": 0, 00:27:37.838 "state": "enabled", 00:27:37.838 "thread": "nvmf_tgt_poll_group_000", 00:27:37.839 "listen_address": { 00:27:37.839 "trtype": "TCP", 00:27:37.839 "adrfam": "IPv4", 00:27:37.839 "traddr": "10.0.0.2", 00:27:37.839 "trsvcid": "4420" 00:27:37.839 }, 00:27:37.839 "peer_address": { 00:27:37.839 "trtype": "TCP", 00:27:37.839 "adrfam": "IPv4", 00:27:37.839 "traddr": "10.0.0.1", 00:27:37.839 "trsvcid": "57170" 00:27:37.839 }, 00:27:37.839 "auth": { 00:27:37.839 "state": "completed", 00:27:37.839 "digest": "sha256", 00:27:37.839 "dhgroup": "null" 00:27:37.839 } 00:27:37.839 } 00:27:37.839 ]' 00:27:37.839 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:37.839 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:37.839 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:37.839 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:37.839 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:38.097 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:38.097 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:38.097 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:38.355 08:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:39.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:39.735 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.304 08:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.874 00:27:40.874 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:40.874 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:40.874 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:41.442 08:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:41.442 { 00:27:41.442 "cntlid": 5, 00:27:41.442 "qid": 0, 00:27:41.442 "state": "enabled", 00:27:41.442 "thread": "nvmf_tgt_poll_group_000", 00:27:41.442 "listen_address": { 00:27:41.442 "trtype": "TCP", 00:27:41.442 "adrfam": "IPv4", 00:27:41.442 "traddr": "10.0.0.2", 00:27:41.442 "trsvcid": "4420" 00:27:41.442 }, 00:27:41.442 "peer_address": { 00:27:41.442 "trtype": "TCP", 00:27:41.442 "adrfam": "IPv4", 00:27:41.442 "traddr": "10.0.0.1", 00:27:41.442 "trsvcid": "57194" 00:27:41.442 }, 00:27:41.442 "auth": { 00:27:41.442 "state": "completed", 00:27:41.442 "digest": "sha256", 00:27:41.442 "dhgroup": "null" 00:27:41.442 } 00:27:41.442 } 00:27:41.442 ]' 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:41.442 08:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:42.381 08:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:43.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
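[editor's note] Each connect_authenticate pass in the trace repeats the same RPC sequence for one digest/dhgroup/key combination. A condensed sketch of one pass, using the same RPCs, sockets and NQNs as this run (the kernel-initiator nvme connect leg is sketched separately further below); the surrounding loops and assertions are omitted.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # host side: restrict the SPDK initiator to one digest/dhgroup combination
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # target side: allow the host on the subsystem with a DH-HMAC-CHAP key (and controller key)
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach with the matching keys, then verify what the target's qpair negotiated
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'   # expect digest/dhgroup and state "completed"
    # tear down before the next digest/dhgroup/key combination
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"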
00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:43.319 08:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:43.890 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:44.457 00:27:44.457 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:44.457 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:44.457 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:44.715 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.715 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:44.715 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.715 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:27:44.715 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.715 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:44.715 { 00:27:44.715 "cntlid": 7, 00:27:44.715 "qid": 0, 00:27:44.715 "state": "enabled", 00:27:44.715 "thread": "nvmf_tgt_poll_group_000", 00:27:44.715 "listen_address": { 00:27:44.715 "trtype": "TCP", 00:27:44.715 "adrfam": "IPv4", 00:27:44.715 "traddr": "10.0.0.2", 00:27:44.715 "trsvcid": "4420" 00:27:44.715 }, 00:27:44.715 "peer_address": { 00:27:44.715 "trtype": "TCP", 00:27:44.715 "adrfam": "IPv4", 00:27:44.715 "traddr": "10.0.0.1", 00:27:44.715 "trsvcid": "57226" 00:27:44.715 }, 00:27:44.715 "auth": { 00:27:44.715 "state": "completed", 00:27:44.715 "digest": "sha256", 00:27:44.715 "dhgroup": "null" 00:27:44.715 } 00:27:44.715 } 00:27:44.715 ]' 00:27:44.715 08:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:44.715 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:44.715 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:44.715 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:44.715 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:44.715 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:44.715 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:44.716 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:44.974 08:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:27:46.351 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:46.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:46.351 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:46.351 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.351 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.352 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.352 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.352 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:46.352 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
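[editor's note] The trace also exercises a kernel-initiator leg: nvme-cli is handed the plaintext DHHC-1 secrets on the command line, whereas the SPDK host path above references registered keyring names. A sketch with the same flags as the trace; reading the secrets back from the generated key files is an assumption about how auth.sh feeds them to nvme-cli.

    secret=$(cat /tmp/spdk.key-null.v0v)          # host key, e.g. DHHC-1:00:...
    ctrl_secret=$(cat /tmp/spdk.key-sha512.17d)   # controller key, enables bidirectional auth
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # afterwards the host is removed from the subsystem with nvmf_subsystem_remove_host, as traced above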
00:27:46.352 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.612 08:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.871 00:27:46.871 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:46.871 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:46.871 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:47.447 { 00:27:47.447 "cntlid": 9, 00:27:47.447 "qid": 0, 00:27:47.447 "state": 
"enabled", 00:27:47.447 "thread": "nvmf_tgt_poll_group_000", 00:27:47.447 "listen_address": { 00:27:47.447 "trtype": "TCP", 00:27:47.447 "adrfam": "IPv4", 00:27:47.447 "traddr": "10.0.0.2", 00:27:47.447 "trsvcid": "4420" 00:27:47.447 }, 00:27:47.447 "peer_address": { 00:27:47.447 "trtype": "TCP", 00:27:47.447 "adrfam": "IPv4", 00:27:47.447 "traddr": "10.0.0.1", 00:27:47.447 "trsvcid": "60518" 00:27:47.447 }, 00:27:47.447 "auth": { 00:27:47.447 "state": "completed", 00:27:47.447 "digest": "sha256", 00:27:47.447 "dhgroup": "ffdhe2048" 00:27:47.447 } 00:27:47.447 } 00:27:47.447 ]' 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:47.447 08:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:48.395 08:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:49.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.773 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:50.032 08:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.032 08:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.599 00:27:50.599 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:50.599 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:50.599 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:51.169 { 00:27:51.169 "cntlid": 11, 00:27:51.169 "qid": 0, 00:27:51.169 "state": "enabled", 00:27:51.169 "thread": "nvmf_tgt_poll_group_000", 00:27:51.169 "listen_address": { 00:27:51.169 "trtype": "TCP", 00:27:51.169 "adrfam": "IPv4", 00:27:51.169 "traddr": "10.0.0.2", 00:27:51.169 "trsvcid": "4420" 00:27:51.169 }, 00:27:51.169 "peer_address": { 
00:27:51.169 "trtype": "TCP", 00:27:51.169 "adrfam": "IPv4", 00:27:51.169 "traddr": "10.0.0.1", 00:27:51.169 "trsvcid": "60534" 00:27:51.169 }, 00:27:51.169 "auth": { 00:27:51.169 "state": "completed", 00:27:51.169 "digest": "sha256", 00:27:51.169 "dhgroup": "ffdhe2048" 00:27:51.169 } 00:27:51.169 } 00:27:51.169 ]' 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:51.169 08:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:51.738 08:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:53.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.115 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
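[editor's note] One detail worth calling out in the connect_authenticate trace: the controller key is optional. The ${ckeys[$3]:+...} expansion adds --dhchap-ctrlr-key only when a controller key exists, which is why key3 (whose ckeys entry is empty) is added and attached with unidirectional authentication only. A tiny illustration of that expansion, using the key-file names from this run:

    ckeys[2]=/tmp/spdk.key-sha256.aDt; ckeys[3]=""
    for keyid in 2 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid extra args: ${ckey[@]:-<none>}"    # key2 -> --dhchap-ctrlr-key ckey2, key3 -> <none>
    done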
00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.685 08:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.685 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.685 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.685 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.944 00:27:53.944 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:53.944 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:53.944 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:54.510 { 00:27:54.510 "cntlid": 13, 00:27:54.510 "qid": 0, 00:27:54.510 "state": "enabled", 00:27:54.510 "thread": "nvmf_tgt_poll_group_000", 00:27:54.510 "listen_address": { 00:27:54.510 "trtype": "TCP", 00:27:54.510 "adrfam": "IPv4", 00:27:54.510 "traddr": "10.0.0.2", 00:27:54.510 "trsvcid": "4420" 00:27:54.510 }, 00:27:54.510 "peer_address": { 00:27:54.510 "trtype": "TCP", 00:27:54.510 "adrfam": "IPv4", 00:27:54.510 "traddr": "10.0.0.1", 00:27:54.510 "trsvcid": "60558" 00:27:54.510 }, 00:27:54.510 "auth": { 00:27:54.510 "state": "completed", 00:27:54.510 "digest": "sha256", 00:27:54.510 "dhgroup": "ffdhe2048" 00:27:54.510 } 00:27:54.510 } 00:27:54.510 ]' 00:27:54.510 08:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:54.510 08:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:54.770 08:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:27:56.150 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:56.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:56.410 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:56.410 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.410 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:56.410 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.410 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:56.410 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:56.410 08:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:56.980 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:57.548 00:27:57.548 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:57.548 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:57.548 08:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:57.808 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.808 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:57.808 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.808 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.808 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.808 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:57.808 { 00:27:57.808 "cntlid": 15, 00:27:57.808 "qid": 0, 00:27:57.808 "state": "enabled", 00:27:57.808 "thread": "nvmf_tgt_poll_group_000", 00:27:57.808 "listen_address": { 00:27:57.808 "trtype": "TCP", 00:27:57.808 "adrfam": "IPv4", 00:27:57.808 "traddr": "10.0.0.2", 00:27:57.808 "trsvcid": "4420" 00:27:57.808 }, 00:27:57.808 "peer_address": { 00:27:57.808 "trtype": "TCP", 00:27:57.808 "adrfam": "IPv4", 00:27:57.808 "traddr": "10.0.0.1", 00:27:57.808 "trsvcid": "33070" 00:27:57.808 }, 00:27:57.808 "auth": { 00:27:57.808 "state": "completed", 00:27:57.808 "digest": "sha256", 00:27:57.808 "dhgroup": "ffdhe2048" 00:27:57.808 } 00:27:57.808 } 00:27:57.808 ]' 00:27:57.808 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:58.067 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:58.067 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:58.067 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:58.067 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:58.067 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:58.067 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:58.067 08:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:58.634 08:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:00.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:00.541 08:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.799 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.367 00:28:01.367 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:01.367 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:01.367 08:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:01.935 { 00:28:01.935 "cntlid": 17, 00:28:01.935 "qid": 0, 00:28:01.935 "state": "enabled", 00:28:01.935 "thread": "nvmf_tgt_poll_group_000", 00:28:01.935 "listen_address": { 00:28:01.935 "trtype": "TCP", 00:28:01.935 "adrfam": "IPv4", 00:28:01.935 "traddr": "10.0.0.2", 00:28:01.935 "trsvcid": "4420" 00:28:01.935 }, 00:28:01.935 "peer_address": { 00:28:01.935 "trtype": "TCP", 00:28:01.935 "adrfam": "IPv4", 00:28:01.935 "traddr": "10.0.0.1", 00:28:01.935 "trsvcid": "33096" 00:28:01.935 }, 00:28:01.935 "auth": { 00:28:01.935 "state": "completed", 00:28:01.935 "digest": "sha256", 00:28:01.935 "dhgroup": "ffdhe3072" 00:28:01.935 } 00:28:01.935 } 00:28:01.935 ]' 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:01.935 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:02.505 08:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:03.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:03.920 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.179 08:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.179 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.437 00:28:04.437 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:04.437 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:04.437 08:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:04.697 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.697 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:04.697 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.697 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.697 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.697 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:04.697 { 00:28:04.697 "cntlid": 19, 00:28:04.697 "qid": 0, 00:28:04.697 "state": "enabled", 00:28:04.697 "thread": "nvmf_tgt_poll_group_000", 00:28:04.697 "listen_address": { 00:28:04.697 "trtype": "TCP", 00:28:04.697 "adrfam": "IPv4", 00:28:04.697 "traddr": "10.0.0.2", 00:28:04.697 "trsvcid": "4420" 00:28:04.697 }, 00:28:04.697 "peer_address": { 00:28:04.697 "trtype": "TCP", 00:28:04.697 "adrfam": "IPv4", 00:28:04.697 "traddr": "10.0.0.1", 00:28:04.697 "trsvcid": "33124" 00:28:04.697 }, 00:28:04.697 "auth": { 00:28:04.697 "state": "completed", 00:28:04.697 "digest": "sha256", 00:28:04.697 "dhgroup": "ffdhe3072" 00:28:04.697 } 00:28:04.697 } 00:28:04.697 ]' 00:28:04.697 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:04.957 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:04.957 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:04.957 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:04.957 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:04.957 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:04.957 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:04.957 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:05.527 08:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:06.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:06.901 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
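For reference, the block below is a condensed sketch, not part of the captured trace, of the single configure, authenticate, verify, and teardown cycle that target/auth.sh repeats above and below for each digest, dhgroup, and key index. The rpc.py path, host RPC socket, NQNs, target address, and flags are the ones visible in this run; key2/ckey2 are key names registered earlier in the test, and the two DHHC-1 secrets are placeholders standing in for the generated keys, not real values. In the trace the target-side calls go through the rpc_cmd helper; they are shown here as direct rpc.py calls for brevity.

#!/usr/bin/env bash
# Sketch of one connect_authenticate cycle (sha256 / ffdhe3072 / key index 2), assuming
# the target and the SPDK host (RPC socket /var/tmp/host.sock) are already running and
# the named keys key2/ckey2 were added to the target keyring earlier in the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

# 1. Restrict the SPDK host (initiator) to one digest/dhgroup combination.
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# 2. Allow the host NQN on the target subsystem; the ctrlr key enables bidirectional auth.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the SPDK host with the same keys, then verify that the
#    qpair negotiated the expected parameters and that authentication completed.
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
$rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'    # expect: completed
$rpc -s $hostsock bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake with the kernel initiator, passing the literal secrets
#    (placeholders below), then tear down before the next digest/dhgroup/key combination.
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret 'DHHC-1:02:<host key>' --dhchap-ctrl-secret 'DHHC-1:01:<controller key>'
nvme disconnect -n $subnqn
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn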
00:28:07.161 08:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.728 00:28:07.728 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:07.728 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:07.728 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:07.986 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.986 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:07.986 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.986 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.986 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.986 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:07.986 { 00:28:07.986 "cntlid": 21, 00:28:07.986 "qid": 0, 00:28:07.986 "state": "enabled", 00:28:07.986 "thread": "nvmf_tgt_poll_group_000", 00:28:07.986 "listen_address": { 00:28:07.986 "trtype": "TCP", 00:28:07.986 "adrfam": "IPv4", 00:28:07.987 "traddr": "10.0.0.2", 00:28:07.987 "trsvcid": "4420" 00:28:07.987 }, 00:28:07.987 "peer_address": { 00:28:07.987 "trtype": "TCP", 00:28:07.987 "adrfam": "IPv4", 00:28:07.987 "traddr": "10.0.0.1", 00:28:07.987 "trsvcid": "55640" 00:28:07.987 }, 00:28:07.987 "auth": { 00:28:07.987 "state": "completed", 00:28:07.987 "digest": "sha256", 00:28:07.987 "dhgroup": "ffdhe3072" 00:28:07.987 } 00:28:07.987 } 00:28:07.987 ]' 00:28:07.987 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:07.987 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:07.987 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:08.245 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:08.245 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:08.245 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:08.245 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:08.245 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:08.502 08:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:28:09.880 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:09.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:09.880 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:09.881 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.881 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.881 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.881 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:09.881 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:09.881 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:10.139 08:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:10.709 00:28:10.709 08:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:10.709 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:10.709 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:10.981 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.981 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:10.981 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.981 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.981 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.981 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:10.981 { 00:28:10.981 "cntlid": 23, 00:28:10.981 "qid": 0, 00:28:10.981 "state": "enabled", 00:28:10.981 "thread": "nvmf_tgt_poll_group_000", 00:28:10.981 "listen_address": { 00:28:10.981 "trtype": "TCP", 00:28:10.981 "adrfam": "IPv4", 00:28:10.981 "traddr": "10.0.0.2", 00:28:10.981 "trsvcid": "4420" 00:28:10.981 }, 00:28:10.981 "peer_address": { 00:28:10.981 "trtype": "TCP", 00:28:10.981 "adrfam": "IPv4", 00:28:10.981 "traddr": "10.0.0.1", 00:28:10.981 "trsvcid": "55664" 00:28:10.981 }, 00:28:10.981 "auth": { 00:28:10.981 "state": "completed", 00:28:10.981 "digest": "sha256", 00:28:10.981 "dhgroup": "ffdhe3072" 00:28:10.981 } 00:28:10.981 } 00:28:10.981 ]' 00:28:10.981 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:11.242 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:11.242 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:11.242 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:11.242 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:11.242 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:11.242 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:11.242 08:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:11.811 08:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:13.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:13.193 08:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:13.193 08:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:13.760 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:28:13.760 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:13.760 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:13.760 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:13.760 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:13.760 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:13.761 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.761 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.761 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.761 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.761 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.761 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.057 00:28:14.057 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:14.057 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:14.057 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:14.316 { 00:28:14.316 "cntlid": 25, 00:28:14.316 "qid": 0, 00:28:14.316 "state": "enabled", 00:28:14.316 "thread": "nvmf_tgt_poll_group_000", 00:28:14.316 "listen_address": { 00:28:14.316 "trtype": "TCP", 00:28:14.316 "adrfam": "IPv4", 00:28:14.316 "traddr": "10.0.0.2", 00:28:14.316 "trsvcid": "4420" 00:28:14.316 }, 00:28:14.316 "peer_address": { 00:28:14.316 "trtype": "TCP", 00:28:14.316 "adrfam": "IPv4", 00:28:14.316 "traddr": "10.0.0.1", 00:28:14.316 "trsvcid": "55688" 00:28:14.316 }, 00:28:14.316 "auth": { 00:28:14.316 "state": "completed", 00:28:14.316 "digest": "sha256", 00:28:14.316 "dhgroup": "ffdhe4096" 00:28:14.316 } 00:28:14.316 } 00:28:14.316 ]' 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:14.316 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:14.574 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:14.574 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:14.574 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:14.574 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:14.574 08:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:15.141 08:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:16.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.515 08:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.476 00:28:17.476 08:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:17.476 08:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:17.476 08:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:17.734 { 00:28:17.734 "cntlid": 27, 00:28:17.734 "qid": 0, 00:28:17.734 "state": "enabled", 00:28:17.734 "thread": "nvmf_tgt_poll_group_000", 00:28:17.734 "listen_address": { 00:28:17.734 "trtype": "TCP", 00:28:17.734 "adrfam": "IPv4", 00:28:17.734 "traddr": "10.0.0.2", 00:28:17.734 "trsvcid": "4420" 00:28:17.734 }, 00:28:17.734 "peer_address": { 00:28:17.734 "trtype": "TCP", 00:28:17.734 "adrfam": "IPv4", 00:28:17.734 "traddr": "10.0.0.1", 00:28:17.734 "trsvcid": "49922" 00:28:17.734 }, 00:28:17.734 "auth": { 00:28:17.734 "state": "completed", 00:28:17.734 "digest": "sha256", 00:28:17.734 "dhgroup": "ffdhe4096" 00:28:17.734 } 00:28:17.734 } 00:28:17.734 ]' 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:17.734 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:17.993 08:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:28:19.369 08:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:19.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:19.369 08:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:19.369 08:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.369 08:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.369 08:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.369 08:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:19.369 08:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:19.369 08:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.937 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.505 00:28:20.505 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:20.505 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:20.505 08:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.072 08:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:21.072 { 00:28:21.072 "cntlid": 29, 00:28:21.072 "qid": 0, 00:28:21.072 "state": "enabled", 00:28:21.072 "thread": "nvmf_tgt_poll_group_000", 00:28:21.072 "listen_address": { 00:28:21.072 "trtype": "TCP", 00:28:21.072 "adrfam": "IPv4", 00:28:21.072 "traddr": "10.0.0.2", 00:28:21.072 "trsvcid": "4420" 00:28:21.072 }, 00:28:21.072 "peer_address": { 00:28:21.072 "trtype": "TCP", 00:28:21.072 "adrfam": "IPv4", 00:28:21.072 "traddr": "10.0.0.1", 00:28:21.072 "trsvcid": "49958" 00:28:21.072 }, 00:28:21.072 "auth": { 00:28:21.072 "state": "completed", 00:28:21.072 "digest": "sha256", 00:28:21.072 "dhgroup": "ffdhe4096" 00:28:21.072 } 00:28:21.072 } 00:28:21.072 ]' 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:21.072 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:21.329 08:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:22.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe4096 00:28:22.703 08:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:22.961 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:28:22.961 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:22.961 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:22.961 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:22.961 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:22.961 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:22.962 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:22.962 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.962 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.962 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.962 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:22.962 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:23.528 00:28:23.528 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:23.528 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:23.528 08:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:23.786 { 00:28:23.786 "cntlid": 31, 00:28:23.786 "qid": 0, 00:28:23.786 "state": "enabled", 00:28:23.786 "thread": 
"nvmf_tgt_poll_group_000", 00:28:23.786 "listen_address": { 00:28:23.786 "trtype": "TCP", 00:28:23.786 "adrfam": "IPv4", 00:28:23.786 "traddr": "10.0.0.2", 00:28:23.786 "trsvcid": "4420" 00:28:23.786 }, 00:28:23.786 "peer_address": { 00:28:23.786 "trtype": "TCP", 00:28:23.786 "adrfam": "IPv4", 00:28:23.786 "traddr": "10.0.0.1", 00:28:23.786 "trsvcid": "49990" 00:28:23.786 }, 00:28:23.786 "auth": { 00:28:23.786 "state": "completed", 00:28:23.786 "digest": "sha256", 00:28:23.786 "dhgroup": "ffdhe4096" 00:28:23.786 } 00:28:23.786 } 00:28:23.786 ]' 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:23.786 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:24.352 08:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:25.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:25.727 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.294 08:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.860 00:28:26.860 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:26.860 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:26.861 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:27.427 { 00:28:27.427 "cntlid": 33, 00:28:27.427 "qid": 0, 00:28:27.427 "state": "enabled", 00:28:27.427 "thread": "nvmf_tgt_poll_group_000", 00:28:27.427 "listen_address": { 00:28:27.427 "trtype": "TCP", 00:28:27.427 "adrfam": "IPv4", 00:28:27.427 "traddr": "10.0.0.2", 00:28:27.427 "trsvcid": "4420" 00:28:27.427 }, 00:28:27.427 "peer_address": { 00:28:27.427 "trtype": "TCP", 00:28:27.427 "adrfam": 
"IPv4", 00:28:27.427 "traddr": "10.0.0.1", 00:28:27.427 "trsvcid": "45144" 00:28:27.427 }, 00:28:27.427 "auth": { 00:28:27.427 "state": "completed", 00:28:27.427 "digest": "sha256", 00:28:27.427 "dhgroup": "ffdhe6144" 00:28:27.427 } 00:28:27.427 } 00:28:27.427 ]' 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:27.427 08:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:27.685 08:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:29.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.586 08:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:29.844 
08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.844 08:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.778 00:28:30.778 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:30.778 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:30.778 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:31.036 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.036 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:31.036 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.036 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.036 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.036 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:31.036 { 00:28:31.036 "cntlid": 35, 00:28:31.036 "qid": 0, 00:28:31.036 "state": "enabled", 00:28:31.036 "thread": "nvmf_tgt_poll_group_000", 00:28:31.036 "listen_address": { 00:28:31.036 "trtype": "TCP", 00:28:31.036 "adrfam": "IPv4", 00:28:31.036 "traddr": "10.0.0.2", 00:28:31.036 "trsvcid": "4420" 00:28:31.036 }, 00:28:31.036 "peer_address": { 00:28:31.036 "trtype": "TCP", 00:28:31.036 "adrfam": "IPv4", 00:28:31.036 "traddr": "10.0.0.1", 00:28:31.036 "trsvcid": "45174" 00:28:31.036 }, 00:28:31.036 "auth": { 00:28:31.036 "state": "completed", 00:28:31.036 "digest": "sha256", 00:28:31.036 "dhgroup": "ffdhe6144" 00:28:31.036 } 00:28:31.036 } 00:28:31.036 ]' 00:28:31.036 08:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:31.294 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:31.294 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:31.294 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:31.294 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:31.294 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:31.294 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:31.294 08:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:31.860 08:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:33.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.789 08:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.789 08:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.724 00:28:34.724 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:34.724 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:34.724 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:35.291 { 00:28:35.291 "cntlid": 37, 00:28:35.291 "qid": 0, 00:28:35.291 "state": "enabled", 00:28:35.291 "thread": "nvmf_tgt_poll_group_000", 00:28:35.291 "listen_address": { 00:28:35.291 "trtype": "TCP", 00:28:35.291 "adrfam": "IPv4", 00:28:35.291 "traddr": "10.0.0.2", 00:28:35.291 "trsvcid": "4420" 00:28:35.291 }, 00:28:35.291 "peer_address": { 00:28:35.291 "trtype": "TCP", 00:28:35.291 "adrfam": "IPv4", 00:28:35.291 "traddr": "10.0.0.1", 00:28:35.291 "trsvcid": "45204" 00:28:35.291 }, 00:28:35.291 "auth": { 00:28:35.291 "state": "completed", 00:28:35.291 "digest": "sha256", 00:28:35.291 "dhgroup": "ffdhe6144" 00:28:35.291 } 00:28:35.291 } 00:28:35.291 ]' 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:35.291 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:35.549 08:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:35.549 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:35.549 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:35.549 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:35.549 08:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:36.116 08:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:38.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:38.017 08:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:38.951 00:28:38.951 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:38.951 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:38.951 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:39.210 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.210 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:39.210 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.210 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.210 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.210 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:39.210 { 00:28:39.210 "cntlid": 39, 00:28:39.210 "qid": 0, 00:28:39.210 "state": "enabled", 00:28:39.210 "thread": "nvmf_tgt_poll_group_000", 00:28:39.210 "listen_address": { 00:28:39.210 "trtype": "TCP", 00:28:39.210 "adrfam": "IPv4", 00:28:39.210 "traddr": "10.0.0.2", 00:28:39.210 "trsvcid": "4420" 00:28:39.210 }, 00:28:39.210 "peer_address": { 00:28:39.210 "trtype": "TCP", 00:28:39.210 "adrfam": "IPv4", 00:28:39.210 "traddr": "10.0.0.1", 00:28:39.210 "trsvcid": "47654" 00:28:39.210 }, 00:28:39.210 "auth": { 00:28:39.210 "state": "completed", 00:28:39.210 "digest": "sha256", 00:28:39.210 "dhgroup": "ffdhe6144" 00:28:39.210 } 00:28:39.210 } 00:28:39.210 ]' 00:28:39.210 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:39.467 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:39.467 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:39.467 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:39.467 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:39.467 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:39.467 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:39.467 08:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:40.033 08:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:41.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.942 08:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.316 00:28:43.316 08:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:43.316 08:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:43.316 08:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:43.574 { 00:28:43.574 "cntlid": 41, 00:28:43.574 "qid": 0, 00:28:43.574 "state": "enabled", 00:28:43.574 "thread": "nvmf_tgt_poll_group_000", 00:28:43.574 "listen_address": { 00:28:43.574 "trtype": "TCP", 00:28:43.574 "adrfam": "IPv4", 00:28:43.574 "traddr": "10.0.0.2", 00:28:43.574 "trsvcid": "4420" 00:28:43.574 }, 00:28:43.574 "peer_address": { 00:28:43.574 "trtype": "TCP", 00:28:43.574 "adrfam": "IPv4", 00:28:43.574 "traddr": "10.0.0.1", 00:28:43.574 "trsvcid": "47686" 00:28:43.574 }, 00:28:43.574 "auth": { 00:28:43.574 "state": "completed", 00:28:43.574 "digest": "sha256", 00:28:43.574 "dhgroup": "ffdhe8192" 00:28:43.574 } 00:28:43.574 } 00:28:43.574 ]' 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:43.574 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:43.832 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:43.832 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:43.832 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:43.832 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:43.832 08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:44.091 
08:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:45.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:45.991 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.249 08:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.621 00:28:47.621 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:47.621 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:47.621 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:48.187 { 00:28:48.187 "cntlid": 43, 00:28:48.187 "qid": 0, 00:28:48.187 "state": "enabled", 00:28:48.187 "thread": "nvmf_tgt_poll_group_000", 00:28:48.187 "listen_address": { 00:28:48.187 "trtype": "TCP", 00:28:48.187 "adrfam": "IPv4", 00:28:48.187 "traddr": "10.0.0.2", 00:28:48.187 "trsvcid": "4420" 00:28:48.187 }, 00:28:48.187 "peer_address": { 00:28:48.187 "trtype": "TCP", 00:28:48.187 "adrfam": "IPv4", 00:28:48.187 "traddr": "10.0.0.1", 00:28:48.187 "trsvcid": "44504" 00:28:48.187 }, 00:28:48.187 "auth": { 00:28:48.187 "state": "completed", 00:28:48.187 "digest": "sha256", 00:28:48.187 "dhgroup": "ffdhe8192" 00:28:48.187 } 00:28:48.187 } 00:28:48.187 ]' 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:48.187 08:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:48.753 08:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:50.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:50.152 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.719 08:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.719 08:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.719 08:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.719 08:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.091 
00:28:52.091 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:52.091 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:52.091 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:52.656 { 00:28:52.656 "cntlid": 45, 00:28:52.656 "qid": 0, 00:28:52.656 "state": "enabled", 00:28:52.656 "thread": "nvmf_tgt_poll_group_000", 00:28:52.656 "listen_address": { 00:28:52.656 "trtype": "TCP", 00:28:52.656 "adrfam": "IPv4", 00:28:52.656 "traddr": "10.0.0.2", 00:28:52.656 "trsvcid": "4420" 00:28:52.656 }, 00:28:52.656 "peer_address": { 00:28:52.656 "trtype": "TCP", 00:28:52.656 "adrfam": "IPv4", 00:28:52.656 "traddr": "10.0.0.1", 00:28:52.656 "trsvcid": "44526" 00:28:52.656 }, 00:28:52.656 "auth": { 00:28:52.656 "state": "completed", 00:28:52.656 "digest": "sha256", 00:28:52.656 "dhgroup": "ffdhe8192" 00:28:52.656 } 00:28:52.656 } 00:28:52.656 ]' 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:52.656 08:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:52.656 08:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:52.656 08:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:52.656 08:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:52.656 08:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:52.656 08:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:52.656 08:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:53.221 08:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:54.594 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:54.594 08:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:54.852 08:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:56.752 00:28:56.752 08:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:56.752 08:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:56.752 08:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:56.752 { 00:28:56.752 "cntlid": 47, 00:28:56.752 "qid": 0, 00:28:56.752 "state": "enabled", 00:28:56.752 "thread": "nvmf_tgt_poll_group_000", 00:28:56.752 "listen_address": { 00:28:56.752 "trtype": "TCP", 00:28:56.752 "adrfam": "IPv4", 00:28:56.752 "traddr": "10.0.0.2", 00:28:56.752 "trsvcid": "4420" 00:28:56.752 }, 00:28:56.752 "peer_address": { 00:28:56.752 "trtype": "TCP", 00:28:56.752 "adrfam": "IPv4", 00:28:56.752 "traddr": "10.0.0.1", 00:28:56.752 "trsvcid": "53244" 00:28:56.752 }, 00:28:56.752 "auth": { 00:28:56.752 "state": "completed", 00:28:56.752 "digest": "sha256", 00:28:56.752 "dhgroup": "ffdhe8192" 00:28:56.752 } 00:28:56.752 } 00:28:56.752 ]' 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:56.752 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:57.010 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:57.010 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:57.010 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:57.010 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:57.010 08:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:57.577 08:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:58.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.958 
08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:58.958 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.528 08:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.098 00:29:00.098 08:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:00.098 08:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:00.098 08:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:00.668 { 00:29:00.668 "cntlid": 49, 00:29:00.668 "qid": 0, 00:29:00.668 "state": "enabled", 00:29:00.668 "thread": "nvmf_tgt_poll_group_000", 00:29:00.668 "listen_address": { 00:29:00.668 "trtype": "TCP", 00:29:00.668 "adrfam": "IPv4", 00:29:00.668 "traddr": "10.0.0.2", 00:29:00.668 "trsvcid": "4420" 00:29:00.668 }, 00:29:00.668 "peer_address": { 00:29:00.668 "trtype": "TCP", 00:29:00.668 "adrfam": "IPv4", 00:29:00.668 "traddr": "10.0.0.1", 00:29:00.668 "trsvcid": "53266" 00:29:00.668 }, 00:29:00.668 "auth": { 00:29:00.668 "state": "completed", 00:29:00.668 "digest": "sha384", 00:29:00.668 "dhgroup": "null" 00:29:00.668 } 00:29:00.668 } 00:29:00.668 ]' 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:00.668 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:00.934 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:00.934 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:00.934 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:00.934 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:00.934 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:01.508 08:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:02.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
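(Not part of the captured console output: a minimal, hand-written bash sketch of the connect_authenticate cycle that the xtrace lines above keep repeating, added only to make the trace easier to follow. It reuses the addresses, NQNs, and key names visible in the log; it assumes the rpc_cmd wrapper seen in the trace drives the same rpc.py against the target's default RPC socket, and the DH-HMAC-CHAP secret shown is a placeholder, not one of the real secrets from this run.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

  # Limit the SPDK host (bdev_nvme) to the digest/dhgroup combination under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

  # Allow the host NQN on the target subsystem with the key pair for this iteration.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller through the host RPC, then verify the qpair authenticated.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expected: completed
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Repeat the handshake with the kernel initiator, then clean up for the next key/dhgroup.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 'DHHC-1:00:<placeholder-secret>:'
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"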
00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:02.892 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.462 08:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.054 00:29:04.054 08:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:04.054 08:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:04.054 08:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:04.656 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.656 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:04.656 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.656 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.656 08:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.656 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:04.656 { 00:29:04.656 "cntlid": 51, 00:29:04.656 "qid": 0, 00:29:04.656 "state": "enabled", 00:29:04.656 "thread": "nvmf_tgt_poll_group_000", 00:29:04.656 "listen_address": { 00:29:04.656 "trtype": "TCP", 00:29:04.656 "adrfam": "IPv4", 00:29:04.656 "traddr": "10.0.0.2", 00:29:04.656 "trsvcid": "4420" 00:29:04.656 }, 00:29:04.656 "peer_address": { 00:29:04.656 "trtype": "TCP", 00:29:04.656 "adrfam": "IPv4", 00:29:04.656 "traddr": "10.0.0.1", 00:29:04.656 "trsvcid": "53304" 00:29:04.656 }, 00:29:04.656 "auth": { 00:29:04.656 "state": "completed", 00:29:04.656 "digest": "sha384", 00:29:04.656 "dhgroup": "null" 00:29:04.656 } 00:29:04.656 } 00:29:04.656 ]' 00:29:04.656 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:04.917 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:04.917 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:04.917 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:04.917 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:04.917 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:04.917 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:04.917 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:05.177 08:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:29:06.555 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:06.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:06.555 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:06.555 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.555 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:06.555 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.555 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:06.555 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:06.556 08:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.815 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.386 00:29:07.386 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:07.386 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:07.386 08:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:07.956 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.956 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:07.956 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.956 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.956 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.956 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:07.956 { 00:29:07.956 "cntlid": 53, 00:29:07.956 "qid": 0, 00:29:07.956 "state": "enabled", 00:29:07.956 "thread": "nvmf_tgt_poll_group_000", 00:29:07.956 "listen_address": { 00:29:07.956 
"trtype": "TCP", 00:29:07.956 "adrfam": "IPv4", 00:29:07.956 "traddr": "10.0.0.2", 00:29:07.956 "trsvcid": "4420" 00:29:07.956 }, 00:29:07.956 "peer_address": { 00:29:07.956 "trtype": "TCP", 00:29:07.956 "adrfam": "IPv4", 00:29:07.956 "traddr": "10.0.0.1", 00:29:07.956 "trsvcid": "60416" 00:29:07.956 }, 00:29:07.956 "auth": { 00:29:07.956 "state": "completed", 00:29:07.956 "digest": "sha384", 00:29:07.956 "dhgroup": "null" 00:29:07.956 } 00:29:07.956 } 00:29:07.956 ]' 00:29:07.956 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:08.216 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:08.216 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:08.216 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:08.216 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:08.216 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:08.216 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:08.216 08:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:09.156 08:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:10.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:10.538 08:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:11.108 08:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:11.675 00:29:11.675 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:11.675 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:11.675 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:12.241 { 00:29:12.241 "cntlid": 55, 00:29:12.241 "qid": 0, 00:29:12.241 "state": "enabled", 00:29:12.241 "thread": "nvmf_tgt_poll_group_000", 00:29:12.241 "listen_address": { 00:29:12.241 "trtype": "TCP", 00:29:12.241 "adrfam": "IPv4", 00:29:12.241 "traddr": "10.0.0.2", 00:29:12.241 "trsvcid": "4420" 00:29:12.241 }, 00:29:12.241 "peer_address": { 00:29:12.241 "trtype": "TCP", 00:29:12.241 "adrfam": "IPv4", 00:29:12.241 "traddr": "10.0.0.1", 00:29:12.241 "trsvcid": "60428" 00:29:12.241 }, 00:29:12.241 "auth": { 00:29:12.241 "state": "completed", 00:29:12.241 "digest": "sha384", 00:29:12.241 "dhgroup": "null" 00:29:12.241 } 00:29:12.241 } 00:29:12.241 ]' 
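Note the key3 pass just above: nvmf_subsystem_add_host and bdev_nvme_attach_controller are issued with --dhchap-key key3 only and no --dhchap-ctrlr-key, so that iteration exercises unidirectional authentication (the host authenticates to the target, but the controller is not asked to authenticate back). The script arranges this with the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace, which emits the flag pair only when a controller secret exists for that index. A standalone illustration of the idiom, with hypothetical placeholder values:

# Bash ${var:+word} expansion: the ckey array gets two elements when a controller
# secret is defined for the index, and stays empty otherwise.
# Secrets below are hypothetical placeholders, not values from this run.
declare -a ckeys=([0]="DHHC-1:03:placeholder0=" [1]="DHHC-1:03:placeholder1=" [2]="DHHC-1:03:placeholder2=")   # nothing at index 3

for keyid in 0 3; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # Print (rather than run) the add_host call that would be issued.
    echo nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key "key$keyid" "${ckey[@]}"
done
# keyid=0 -> ... --dhchap-key key0 --dhchap-ctrlr-key ckey0   (bidirectional)
# keyid=3 -> ... --dhchap-key key3                            (host-to-target only)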
00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:12.241 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:12.500 08:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:13.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:13.880 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:14.450 08:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:15.016 00:29:15.016 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:15.016 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:15.016 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:15.273 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.273 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:15.273 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.273 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.273 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.273 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:15.273 { 00:29:15.273 "cntlid": 57, 00:29:15.273 "qid": 0, 00:29:15.273 "state": "enabled", 00:29:15.273 "thread": "nvmf_tgt_poll_group_000", 00:29:15.274 "listen_address": { 00:29:15.274 "trtype": "TCP", 00:29:15.274 "adrfam": "IPv4", 00:29:15.274 "traddr": "10.0.0.2", 00:29:15.274 "trsvcid": "4420" 00:29:15.274 }, 00:29:15.274 "peer_address": { 00:29:15.274 "trtype": "TCP", 00:29:15.274 "adrfam": "IPv4", 00:29:15.274 "traddr": "10.0.0.1", 00:29:15.274 "trsvcid": "60456" 00:29:15.274 }, 00:29:15.274 "auth": { 00:29:15.274 "state": "completed", 00:29:15.274 "digest": "sha384", 00:29:15.274 "dhgroup": "ffdhe2048" 00:29:15.274 } 00:29:15.274 } 00:29:15.274 ]' 00:29:15.274 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:15.274 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:15.274 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:15.274 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:15.274 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:15.531 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:15.531 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:15.531 08:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:15.789 08:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:29:17.165 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:17.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:17.166 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:17.166 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.166 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.166 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.166 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:17.166 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:17.166 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.426 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.427 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.427 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.427 08:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.996 00:29:17.996 08:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:17.996 08:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:17.996 08:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:18.934 { 00:29:18.934 "cntlid": 59, 00:29:18.934 "qid": 0, 00:29:18.934 "state": "enabled", 00:29:18.934 "thread": "nvmf_tgt_poll_group_000", 00:29:18.934 "listen_address": { 00:29:18.934 "trtype": "TCP", 00:29:18.934 "adrfam": "IPv4", 00:29:18.934 "traddr": "10.0.0.2", 00:29:18.934 "trsvcid": "4420" 00:29:18.934 }, 00:29:18.934 "peer_address": { 00:29:18.934 "trtype": "TCP", 00:29:18.934 "adrfam": "IPv4", 00:29:18.934 "traddr": "10.0.0.1", 00:29:18.934 "trsvcid": "49814" 00:29:18.934 }, 00:29:18.934 "auth": { 00:29:18.934 "state": "completed", 00:29:18.934 "digest": "sha384", 00:29:18.934 "dhgroup": "ffdhe2048" 00:29:18.934 } 00:29:18.934 } 00:29:18.934 ]' 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:18.934 
08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:18.934 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:19.504 08:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:20.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.880 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:21.448 00:29:21.448 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:21.448 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:21.448 08:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:21.707 { 00:29:21.707 "cntlid": 61, 00:29:21.707 "qid": 0, 00:29:21.707 "state": "enabled", 00:29:21.707 "thread": "nvmf_tgt_poll_group_000", 00:29:21.707 "listen_address": { 00:29:21.707 "trtype": "TCP", 00:29:21.707 "adrfam": "IPv4", 00:29:21.707 "traddr": "10.0.0.2", 00:29:21.707 "trsvcid": "4420" 00:29:21.707 }, 00:29:21.707 "peer_address": { 00:29:21.707 "trtype": "TCP", 00:29:21.707 "adrfam": "IPv4", 00:29:21.707 "traddr": "10.0.0.1", 00:29:21.707 "trsvcid": "49856" 00:29:21.707 }, 00:29:21.707 "auth": { 00:29:21.707 "state": "completed", 00:29:21.707 "digest": "sha384", 00:29:21.707 "dhgroup": "ffdhe2048" 00:29:21.707 } 00:29:21.707 } 00:29:21.707 ]' 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:21.707 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:22.274 08:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:23.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:23.654 08:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.224 08:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:24.224 08:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:24.793 00:29:24.793 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:24.793 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:24.793 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:25.053 { 00:29:25.053 "cntlid": 63, 00:29:25.053 "qid": 0, 00:29:25.053 "state": "enabled", 00:29:25.053 "thread": "nvmf_tgt_poll_group_000", 00:29:25.053 "listen_address": { 00:29:25.053 "trtype": "TCP", 00:29:25.053 "adrfam": "IPv4", 00:29:25.053 "traddr": "10.0.0.2", 00:29:25.053 "trsvcid": "4420" 00:29:25.053 }, 00:29:25.053 "peer_address": { 00:29:25.053 "trtype": "TCP", 00:29:25.053 "adrfam": "IPv4", 00:29:25.053 "traddr": "10.0.0.1", 00:29:25.053 "trsvcid": "49886" 00:29:25.053 }, 00:29:25.053 "auth": { 00:29:25.053 "state": "completed", 00:29:25.053 "digest": "sha384", 00:29:25.053 "dhgroup": "ffdhe2048" 00:29:25.053 } 00:29:25.053 } 00:29:25.053 ]' 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:25.053 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:25.313 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:25.313 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:25.313 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:25.313 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:25.313 08:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:25.883 08:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:27.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.264 08:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.834 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:28.403 00:29:28.403 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:28.403 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:28.404 08:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:28.974 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.974 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:28.974 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.974 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.974 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:28.974 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:28.974 { 00:29:28.974 "cntlid": 65, 00:29:28.974 "qid": 0, 00:29:28.974 "state": "enabled", 00:29:28.974 "thread": "nvmf_tgt_poll_group_000", 00:29:28.974 "listen_address": { 00:29:28.974 "trtype": "TCP", 00:29:28.974 "adrfam": "IPv4", 00:29:28.974 "traddr": "10.0.0.2", 00:29:28.974 "trsvcid": "4420" 00:29:28.974 }, 00:29:28.974 "peer_address": { 00:29:28.974 "trtype": "TCP", 00:29:28.974 "adrfam": "IPv4", 00:29:28.974 "traddr": "10.0.0.1", 00:29:28.974 "trsvcid": "56774" 00:29:28.974 }, 00:29:28.974 "auth": { 00:29:28.974 "state": "completed", 00:29:28.974 "digest": "sha384", 00:29:28.974 "dhgroup": "ffdhe3072" 00:29:28.974 } 00:29:28.974 } 00:29:28.974 ]' 00:29:28.974 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:29.233 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:29.233 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:29.233 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:29.233 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:29.233 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:29.233 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:29.233 08:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:30.174 08:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:29:31.550 08:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:31.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:31.550 08:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:31.550 08:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.550 08:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.550 08:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.550 08:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:31.550 08:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:31.550 08:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.808 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.066 00:29:32.066 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:32.066 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:29:32.066 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:32.325 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.325 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:32.325 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.325 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.585 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.585 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:32.585 { 00:29:32.585 "cntlid": 67, 00:29:32.585 "qid": 0, 00:29:32.585 "state": "enabled", 00:29:32.585 "thread": "nvmf_tgt_poll_group_000", 00:29:32.585 "listen_address": { 00:29:32.585 "trtype": "TCP", 00:29:32.585 "adrfam": "IPv4", 00:29:32.585 "traddr": "10.0.0.2", 00:29:32.585 "trsvcid": "4420" 00:29:32.585 }, 00:29:32.585 "peer_address": { 00:29:32.585 "trtype": "TCP", 00:29:32.585 "adrfam": "IPv4", 00:29:32.585 "traddr": "10.0.0.1", 00:29:32.585 "trsvcid": "56804" 00:29:32.585 }, 00:29:32.585 "auth": { 00:29:32.585 "state": "completed", 00:29:32.585 "digest": "sha384", 00:29:32.585 "dhgroup": "ffdhe3072" 00:29:32.585 } 00:29:32.585 } 00:29:32.585 ]' 00:29:32.585 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:32.585 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:32.585 08:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:32.585 08:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:32.585 08:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:32.844 08:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:32.844 08:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:32.844 08:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:33.102 08:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:34.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:34.495 08:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.753 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.754 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.754 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.319 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:35.319 { 00:29:35.319 "cntlid": 69, 00:29:35.319 "qid": 0, 00:29:35.319 "state": "enabled", 00:29:35.319 "thread": "nvmf_tgt_poll_group_000", 00:29:35.319 "listen_address": { 00:29:35.319 "trtype": "TCP", 00:29:35.319 "adrfam": "IPv4", 00:29:35.319 "traddr": "10.0.0.2", 00:29:35.319 "trsvcid": "4420" 00:29:35.319 }, 00:29:35.319 "peer_address": { 00:29:35.319 "trtype": "TCP", 00:29:35.319 "adrfam": "IPv4", 00:29:35.319 "traddr": "10.0.0.1", 00:29:35.319 "trsvcid": "56820" 00:29:35.319 }, 00:29:35.319 "auth": { 00:29:35.319 "state": "completed", 00:29:35.319 "digest": "sha384", 00:29:35.319 "dhgroup": "ffdhe3072" 00:29:35.319 } 00:29:35.319 } 00:29:35.319 ]' 00:29:35.319 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:35.578 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:35.578 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:35.578 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:35.578 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:35.578 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:35.578 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:35.578 08:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:36.148 08:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:29:37.531 08:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:37.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:37.531 08:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:37.531 08:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.531 08:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.531 08:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.531 08:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:37.531 08:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:37.531 08:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:38.101 08:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:38.669 00:29:38.669 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:38.669 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:38.669 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:38.927 { 00:29:38.927 "cntlid": 71, 00:29:38.927 "qid": 0, 00:29:38.927 "state": "enabled", 00:29:38.927 "thread": "nvmf_tgt_poll_group_000", 00:29:38.927 "listen_address": { 00:29:38.927 "trtype": "TCP", 00:29:38.927 "adrfam": "IPv4", 00:29:38.927 "traddr": "10.0.0.2", 00:29:38.927 "trsvcid": "4420" 00:29:38.927 }, 00:29:38.927 "peer_address": { 00:29:38.927 "trtype": "TCP", 00:29:38.927 "adrfam": "IPv4", 00:29:38.927 "traddr": "10.0.0.1", 00:29:38.927 "trsvcid": "58824" 00:29:38.927 }, 00:29:38.927 "auth": { 00:29:38.927 "state": "completed", 00:29:38.927 "digest": "sha384", 00:29:38.927 "dhgroup": "ffdhe3072" 00:29:38.927 } 00:29:38.927 } 00:29:38.927 ]' 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:38.927 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:39.187 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:39.187 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:39.187 08:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:39.757 08:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:29:41.135 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:41.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:29:41.136 08:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:41.704 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.273 00:29:42.274 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:42.274 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:42.274 08:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:43.213 { 00:29:43.213 "cntlid": 73, 00:29:43.213 
"qid": 0, 00:29:43.213 "state": "enabled", 00:29:43.213 "thread": "nvmf_tgt_poll_group_000", 00:29:43.213 "listen_address": { 00:29:43.213 "trtype": "TCP", 00:29:43.213 "adrfam": "IPv4", 00:29:43.213 "traddr": "10.0.0.2", 00:29:43.213 "trsvcid": "4420" 00:29:43.213 }, 00:29:43.213 "peer_address": { 00:29:43.213 "trtype": "TCP", 00:29:43.213 "adrfam": "IPv4", 00:29:43.213 "traddr": "10.0.0.1", 00:29:43.213 "trsvcid": "58848" 00:29:43.213 }, 00:29:43.213 "auth": { 00:29:43.213 "state": "completed", 00:29:43.213 "digest": "sha384", 00:29:43.213 "dhgroup": "ffdhe4096" 00:29:43.213 } 00:29:43.213 } 00:29:43.213 ]' 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:43.213 08:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:43.783 08:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:45.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.164 08:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.736 08:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.736 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.307 00:29:46.307 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:46.307 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:46.307 08:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:47.244 { 00:29:47.244 "cntlid": 75, 00:29:47.244 "qid": 0, 00:29:47.244 "state": "enabled", 00:29:47.244 "thread": "nvmf_tgt_poll_group_000", 00:29:47.244 "listen_address": { 00:29:47.244 "trtype": "TCP", 00:29:47.244 "adrfam": "IPv4", 00:29:47.244 "traddr": "10.0.0.2", 00:29:47.244 "trsvcid": "4420" 00:29:47.244 }, 00:29:47.244 "peer_address": { 
00:29:47.244 "trtype": "TCP", 00:29:47.244 "adrfam": "IPv4", 00:29:47.244 "traddr": "10.0.0.1", 00:29:47.244 "trsvcid": "50076" 00:29:47.244 }, 00:29:47.244 "auth": { 00:29:47.244 "state": "completed", 00:29:47.244 "digest": "sha384", 00:29:47.244 "dhgroup": "ffdhe4096" 00:29:47.244 } 00:29:47.244 } 00:29:47.244 ]' 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:47.244 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:47.502 08:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:48.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.875 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:49.136 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.136 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:49.136 08:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:49.717 00:29:49.717 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:49.717 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:49.717 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:50.281 { 00:29:50.281 "cntlid": 77, 00:29:50.281 "qid": 0, 00:29:50.281 "state": "enabled", 00:29:50.281 "thread": "nvmf_tgt_poll_group_000", 00:29:50.281 "listen_address": { 00:29:50.281 "trtype": "TCP", 00:29:50.281 "adrfam": "IPv4", 00:29:50.281 "traddr": "10.0.0.2", 00:29:50.281 "trsvcid": "4420" 00:29:50.281 }, 00:29:50.281 "peer_address": { 00:29:50.281 "trtype": "TCP", 00:29:50.281 "adrfam": "IPv4", 00:29:50.281 "traddr": "10.0.0.1", 00:29:50.281 "trsvcid": "50116" 00:29:50.281 }, 00:29:50.281 "auth": { 00:29:50.281 "state": "completed", 00:29:50.281 "digest": "sha384", 00:29:50.281 "dhgroup": "ffdhe4096" 00:29:50.281 } 00:29:50.281 } 00:29:50.281 ]' 00:29:50.281 08:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:50.281 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:50.538 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:50.538 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:50.538 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:50.538 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:50.538 08:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:51.103 08:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:29:52.473 08:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:52.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:52.732 08:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:52.732 08:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.732 08:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.732 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.732 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:52.732 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:52.732 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:52.991 08:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:53.925 00:29:53.925 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:53.925 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:53.925 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:54.184 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.184 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:54.184 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.184 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.184 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.184 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:54.184 { 00:29:54.184 "cntlid": 79, 00:29:54.184 "qid": 0, 00:29:54.184 "state": "enabled", 00:29:54.184 "thread": "nvmf_tgt_poll_group_000", 00:29:54.184 "listen_address": { 00:29:54.184 "trtype": "TCP", 00:29:54.184 "adrfam": "IPv4", 00:29:54.184 "traddr": "10.0.0.2", 00:29:54.184 "trsvcid": "4420" 00:29:54.184 }, 00:29:54.184 "peer_address": { 00:29:54.184 "trtype": "TCP", 00:29:54.184 "adrfam": "IPv4", 00:29:54.184 "traddr": "10.0.0.1", 00:29:54.184 "trsvcid": "50146" 00:29:54.184 }, 00:29:54.184 "auth": { 00:29:54.184 "state": "completed", 00:29:54.184 "digest": "sha384", 00:29:54.184 "dhgroup": "ffdhe4096" 00:29:54.184 } 00:29:54.184 } 00:29:54.184 ]' 00:29:54.184 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:54.443 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:54.443 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:54.443 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:54.443 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:54.443 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:54.443 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:54.443 08:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:54.702 08:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:56.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:56.604 08:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.863 08:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.799 00:29:57.799 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:57.799 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:57.799 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:58.057 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.057 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:58.057 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.057 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.058 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.058 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:58.058 { 00:29:58.058 "cntlid": 81, 00:29:58.058 "qid": 0, 00:29:58.058 "state": "enabled", 00:29:58.058 "thread": "nvmf_tgt_poll_group_000", 00:29:58.058 "listen_address": { 00:29:58.058 "trtype": "TCP", 00:29:58.058 "adrfam": "IPv4", 00:29:58.058 "traddr": "10.0.0.2", 00:29:58.058 "trsvcid": "4420" 00:29:58.058 }, 00:29:58.058 "peer_address": { 00:29:58.058 "trtype": "TCP", 00:29:58.058 "adrfam": "IPv4", 00:29:58.058 "traddr": "10.0.0.1", 00:29:58.058 "trsvcid": "54070" 00:29:58.058 }, 00:29:58.058 "auth": { 00:29:58.058 "state": "completed", 00:29:58.058 "digest": "sha384", 00:29:58.058 "dhgroup": "ffdhe6144" 00:29:58.058 } 00:29:58.058 } 00:29:58.058 ]' 00:29:58.058 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:58.316 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:58.316 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:58.316 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:58.316 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:58.316 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:29:58.316 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:58.316 08:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:58.883 08:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:00.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:00.257 08:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.823 08:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:00.823 08:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:01.757 00:30:01.757 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:01.757 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:01.757 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:02.016 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.016 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:02.016 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.016 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:02.016 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:02.274 { 00:30:02.274 "cntlid": 83, 00:30:02.274 "qid": 0, 00:30:02.274 "state": "enabled", 00:30:02.274 "thread": "nvmf_tgt_poll_group_000", 00:30:02.274 "listen_address": { 00:30:02.274 "trtype": "TCP", 00:30:02.274 "adrfam": "IPv4", 00:30:02.274 "traddr": "10.0.0.2", 00:30:02.274 "trsvcid": "4420" 00:30:02.274 }, 00:30:02.274 "peer_address": { 00:30:02.274 "trtype": "TCP", 00:30:02.274 "adrfam": "IPv4", 00:30:02.274 "traddr": "10.0.0.1", 00:30:02.274 "trsvcid": "54094" 00:30:02.274 }, 00:30:02.274 "auth": { 00:30:02.274 "state": "completed", 00:30:02.274 "digest": "sha384", 00:30:02.274 "dhgroup": "ffdhe6144" 00:30:02.274 } 00:30:02.274 } 00:30:02.274 ]' 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:02.274 08:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:02.840 08:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:30:04.215 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:04.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:04.215 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:04.215 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.215 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:04.216 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.216 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:04.216 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:04.216 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
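[Besides the SPDK host-side bdev_nvme path, each pass also exercises the kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line. A sketch of that leg, assembled from the connect/disconnect parameters that appear in this trace; the DHHC-1 secrets are elided here (the full test vectors are visible in the log), and the bare rpc.py invocation for the target-side cleanup assumes the default RPC socket.]
# Kernel-initiator leg, assuming an nvme-cli build with DH-HMAC-CHAP support.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret 'DHHC-1:01:<host secret>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>'
# ... I/O or further checks would run here ...
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Finally the host entry is dropped from the subsystem on the target side.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02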
00:30:04.488 08:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.434 00:30:05.434 08:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:05.434 08:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:05.434 08:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:05.692 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.692 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:05.692 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.692 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.692 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.692 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:05.692 { 00:30:05.692 "cntlid": 85, 00:30:05.692 "qid": 0, 00:30:05.692 "state": "enabled", 00:30:05.692 "thread": "nvmf_tgt_poll_group_000", 00:30:05.692 "listen_address": { 00:30:05.692 "trtype": "TCP", 00:30:05.692 "adrfam": "IPv4", 00:30:05.692 "traddr": "10.0.0.2", 00:30:05.692 "trsvcid": "4420" 00:30:05.692 }, 00:30:05.692 "peer_address": { 00:30:05.692 "trtype": "TCP", 00:30:05.692 "adrfam": "IPv4", 00:30:05.692 "traddr": "10.0.0.1", 00:30:05.692 "trsvcid": "54118" 00:30:05.692 }, 00:30:05.692 "auth": { 00:30:05.692 "state": "completed", 00:30:05.692 "digest": "sha384", 00:30:05.692 "dhgroup": "ffdhe6144" 00:30:05.692 } 00:30:05.692 } 00:30:05.692 ]' 00:30:05.692 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:05.951 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:05.951 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:05.951 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:05.951 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:05.951 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:05.951 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:05.951 08:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:06.517 08:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:30:07.892 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:07.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:07.892 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:07.892 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.892 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.150 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.150 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:08.150 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:08.150 08:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:08.716 08:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:09.651 00:30:09.651 08:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:09.651 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:09.651 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:10.217 { 00:30:10.217 "cntlid": 87, 00:30:10.217 "qid": 0, 00:30:10.217 "state": "enabled", 00:30:10.217 "thread": "nvmf_tgt_poll_group_000", 00:30:10.217 "listen_address": { 00:30:10.217 "trtype": "TCP", 00:30:10.217 "adrfam": "IPv4", 00:30:10.217 "traddr": "10.0.0.2", 00:30:10.217 "trsvcid": "4420" 00:30:10.217 }, 00:30:10.217 "peer_address": { 00:30:10.217 "trtype": "TCP", 00:30:10.217 "adrfam": "IPv4", 00:30:10.217 "traddr": "10.0.0.1", 00:30:10.217 "trsvcid": "59204" 00:30:10.217 }, 00:30:10.217 "auth": { 00:30:10.217 "state": "completed", 00:30:10.217 "digest": "sha384", 00:30:10.217 "dhgroup": "ffdhe6144" 00:30:10.217 } 00:30:10.217 } 00:30:10.217 ]' 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:10.217 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:10.476 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:10.476 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:10.476 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:10.476 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:10.476 08:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:11.042 08:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:12.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:12.941 08:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:12.941 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:13.507 08:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:14.439 00:30:14.439 08:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:14.439 08:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:14.439 08:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:15.003 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.003 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:15.003 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.003 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:15.003 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.003 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:15.003 { 00:30:15.003 "cntlid": 89, 00:30:15.003 "qid": 0, 00:30:15.003 "state": "enabled", 00:30:15.003 "thread": "nvmf_tgt_poll_group_000", 00:30:15.003 "listen_address": { 00:30:15.003 "trtype": "TCP", 00:30:15.003 "adrfam": "IPv4", 00:30:15.003 "traddr": "10.0.0.2", 00:30:15.003 "trsvcid": "4420" 00:30:15.003 }, 00:30:15.003 "peer_address": { 00:30:15.003 "trtype": "TCP", 00:30:15.003 "adrfam": "IPv4", 00:30:15.003 "traddr": "10.0.0.1", 00:30:15.003 "trsvcid": "59230" 00:30:15.003 }, 00:30:15.003 "auth": { 00:30:15.003 "state": "completed", 00:30:15.003 "digest": "sha384", 00:30:15.003 "dhgroup": "ffdhe8192" 00:30:15.003 } 00:30:15.003 } 00:30:15.003 ]' 00:30:15.003 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:15.261 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:15.261 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:15.261 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:15.261 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:15.261 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:15.261 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:15.261 08:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:16.194 08:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:17.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:17.127 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:17.692 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:17.693 08:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:18.627 00:30:18.627 08:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:18.627 08:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:18.627 08:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:19.194 { 00:30:19.194 "cntlid": 91, 00:30:19.194 "qid": 0, 00:30:19.194 "state": "enabled", 00:30:19.194 "thread": "nvmf_tgt_poll_group_000", 00:30:19.194 "listen_address": { 00:30:19.194 "trtype": "TCP", 00:30:19.194 "adrfam": "IPv4", 00:30:19.194 "traddr": "10.0.0.2", 00:30:19.194 "trsvcid": "4420" 00:30:19.194 }, 00:30:19.194 "peer_address": { 00:30:19.194 "trtype": "TCP", 00:30:19.194 "adrfam": "IPv4", 00:30:19.194 "traddr": "10.0.0.1", 00:30:19.194 "trsvcid": "49720" 00:30:19.194 }, 00:30:19.194 "auth": { 00:30:19.194 "state": "completed", 00:30:19.194 "digest": "sha384", 00:30:19.194 "dhgroup": "ffdhe8192" 00:30:19.194 } 00:30:19.194 } 00:30:19.194 ]' 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:19.194 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:19.455 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:19.455 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:19.455 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:19.455 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:19.455 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:19.455 08:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:20.043 08:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:30:21.944 08:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:21.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:21.944 08:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:21.944 08:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.944 08:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.944 08:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.944 08:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:21.944 08:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:21.944 08:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.202 08:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:23.576 00:30:23.576 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:23.576 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:23.576 08:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:23.834 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.834 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:23.834 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.834 08:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.834 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.834 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:23.834 { 00:30:23.834 "cntlid": 93, 00:30:23.834 "qid": 0, 00:30:23.834 "state": "enabled", 00:30:23.834 "thread": "nvmf_tgt_poll_group_000", 00:30:23.834 "listen_address": { 00:30:23.834 "trtype": "TCP", 00:30:23.834 "adrfam": "IPv4", 00:30:23.834 "traddr": "10.0.0.2", 00:30:23.834 "trsvcid": "4420" 00:30:23.834 }, 00:30:23.834 "peer_address": { 00:30:23.834 "trtype": "TCP", 00:30:23.834 "adrfam": "IPv4", 00:30:23.834 "traddr": "10.0.0.1", 00:30:23.834 "trsvcid": "49758" 00:30:23.834 }, 00:30:23.834 "auth": { 00:30:23.834 "state": "completed", 00:30:23.834 "digest": "sha384", 00:30:23.834 "dhgroup": "ffdhe8192" 00:30:23.834 } 00:30:23.834 } 00:30:23.834 ]' 00:30:23.834 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:24.093 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:24.093 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:24.093 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:24.093 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:24.093 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:24.093 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:24.093 08:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:25.032 08:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:25.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:30:25.967 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:26.533 08:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:27.908 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:27.908 { 00:30:27.908 "cntlid": 95, 00:30:27.908 "qid": 0, 00:30:27.908 "state": "enabled", 00:30:27.908 "thread": 
"nvmf_tgt_poll_group_000", 00:30:27.908 "listen_address": { 00:30:27.908 "trtype": "TCP", 00:30:27.908 "adrfam": "IPv4", 00:30:27.908 "traddr": "10.0.0.2", 00:30:27.908 "trsvcid": "4420" 00:30:27.908 }, 00:30:27.908 "peer_address": { 00:30:27.908 "trtype": "TCP", 00:30:27.908 "adrfam": "IPv4", 00:30:27.908 "traddr": "10.0.0.1", 00:30:27.908 "trsvcid": "43012" 00:30:27.908 }, 00:30:27.908 "auth": { 00:30:27.908 "state": "completed", 00:30:27.908 "digest": "sha384", 00:30:27.908 "dhgroup": "ffdhe8192" 00:30:27.908 } 00:30:27.908 } 00:30:27.908 ]' 00:30:27.908 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:28.167 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:30:28.167 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:28.167 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:28.167 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:28.167 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:28.167 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:28.167 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:28.732 08:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:29.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:29.668 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:30.234 08:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:30.493 00:30:30.493 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:30.493 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:30.493 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:31.058 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.058 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:31.058 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.058 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.058 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.058 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:31.058 { 00:30:31.058 "cntlid": 97, 00:30:31.058 "qid": 0, 00:30:31.058 "state": "enabled", 00:30:31.058 "thread": "nvmf_tgt_poll_group_000", 00:30:31.058 "listen_address": { 00:30:31.058 "trtype": "TCP", 00:30:31.058 "adrfam": "IPv4", 00:30:31.058 "traddr": "10.0.0.2", 00:30:31.058 "trsvcid": 
"4420" 00:30:31.058 }, 00:30:31.058 "peer_address": { 00:30:31.058 "trtype": "TCP", 00:30:31.058 "adrfam": "IPv4", 00:30:31.058 "traddr": "10.0.0.1", 00:30:31.058 "trsvcid": "43034" 00:30:31.058 }, 00:30:31.058 "auth": { 00:30:31.058 "state": "completed", 00:30:31.058 "digest": "sha512", 00:30:31.058 "dhgroup": "null" 00:30:31.058 } 00:30:31.058 } 00:30:31.058 ]' 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:31.059 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:31.316 08:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:32.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:32.690 08:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:32.949 08:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:32.949 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:33.207 00:30:33.207 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:33.207 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:33.207 08:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:33.774 { 00:30:33.774 "cntlid": 99, 00:30:33.774 "qid": 0, 00:30:33.774 "state": "enabled", 00:30:33.774 "thread": "nvmf_tgt_poll_group_000", 00:30:33.774 "listen_address": { 00:30:33.774 "trtype": "TCP", 00:30:33.774 "adrfam": "IPv4", 00:30:33.774 "traddr": "10.0.0.2", 00:30:33.774 "trsvcid": "4420" 00:30:33.774 }, 00:30:33.774 "peer_address": { 00:30:33.774 "trtype": "TCP", 00:30:33.774 "adrfam": "IPv4", 00:30:33.774 "traddr": "10.0.0.1", 00:30:33.774 "trsvcid": "43052" 00:30:33.774 }, 00:30:33.774 "auth": { 00:30:33.774 "state": "completed", 00:30:33.774 "digest": "sha512", 00:30:33.774 "dhgroup": "null" 
00:30:33.774 } 00:30:33.774 } 00:30:33.774 ]' 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:33.774 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:34.031 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:34.031 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:34.031 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:34.031 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:34.031 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:34.601 08:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:36.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:36.003 08:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.570 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.828 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.828 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:36.828 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:37.394 00:30:37.394 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:37.394 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:37.394 08:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:37.961 { 00:30:37.961 "cntlid": 101, 00:30:37.961 "qid": 0, 00:30:37.961 "state": "enabled", 00:30:37.961 "thread": "nvmf_tgt_poll_group_000", 00:30:37.961 "listen_address": { 00:30:37.961 "trtype": "TCP", 00:30:37.961 "adrfam": "IPv4", 00:30:37.961 "traddr": "10.0.0.2", 00:30:37.961 "trsvcid": "4420" 00:30:37.961 }, 00:30:37.961 "peer_address": { 00:30:37.961 "trtype": "TCP", 00:30:37.961 "adrfam": "IPv4", 00:30:37.961 "traddr": "10.0.0.1", 00:30:37.961 "trsvcid": "40970" 00:30:37.961 }, 00:30:37.961 "auth": { 00:30:37.961 "state": "completed", 00:30:37.961 "digest": "sha512", 00:30:37.961 "dhgroup": "null" 00:30:37.961 } 00:30:37.961 } 00:30:37.961 ]' 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:37.961 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:30:38.219 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:38.219 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:38.219 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:38.219 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:38.219 08:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:38.785 08:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:40.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:40.159 08:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:40.725 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:41.657 00:30:41.657 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:41.657 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:41.657 08:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:42.223 { 00:30:42.223 "cntlid": 103, 00:30:42.223 "qid": 0, 00:30:42.223 "state": "enabled", 00:30:42.223 "thread": "nvmf_tgt_poll_group_000", 00:30:42.223 "listen_address": { 00:30:42.223 "trtype": "TCP", 00:30:42.223 "adrfam": "IPv4", 00:30:42.223 "traddr": "10.0.0.2", 00:30:42.223 "trsvcid": "4420" 00:30:42.223 }, 00:30:42.223 "peer_address": { 00:30:42.223 "trtype": "TCP", 00:30:42.223 "adrfam": "IPv4", 00:30:42.223 "traddr": "10.0.0.1", 00:30:42.223 "trsvcid": "40996" 00:30:42.223 }, 00:30:42.223 "auth": { 00:30:42.223 "state": "completed", 00:30:42.223 "digest": "sha512", 00:30:42.223 "dhgroup": "null" 00:30:42.223 } 00:30:42.223 } 00:30:42.223 ]' 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:30:42.223 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:42.480 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:42.480 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:30:42.481 08:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:42.739 08:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:44.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:44.112 08:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:44.677 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:45.614 00:30:45.614 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:45.614 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:45.614 08:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:46.182 { 00:30:46.182 "cntlid": 105, 00:30:46.182 "qid": 0, 00:30:46.182 "state": "enabled", 00:30:46.182 "thread": "nvmf_tgt_poll_group_000", 00:30:46.182 "listen_address": { 00:30:46.182 "trtype": "TCP", 00:30:46.182 "adrfam": "IPv4", 00:30:46.182 "traddr": "10.0.0.2", 00:30:46.182 "trsvcid": "4420" 00:30:46.182 }, 00:30:46.182 "peer_address": { 00:30:46.182 "trtype": "TCP", 00:30:46.182 "adrfam": "IPv4", 00:30:46.182 "traddr": "10.0.0.1", 00:30:46.182 "trsvcid": "42694" 00:30:46.182 }, 00:30:46.182 "auth": { 00:30:46.182 "state": "completed", 00:30:46.182 "digest": "sha512", 00:30:46.182 "dhgroup": "ffdhe2048" 00:30:46.182 } 00:30:46.182 } 00:30:46.182 ]' 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:46.182 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:46.441 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:46.441 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:46.441 08:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:47.008 
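[Editorial annotation] The entries above are one complete pass of the test's connect_authenticate helper for sha512 / ffdhe2048 / key0. Stripped of the xtrace prefixes, the RPC-level sequence it drives looks roughly like the sketch below: hostrpc is the trace's shorthand for scripts/rpc.py -s /var/tmp/host.sock (the host-side bdev_nvme instance), rpc_cmd drives the NVMe-oF target, and the NQNs and key names are the ones used throughout this run. This is a condensed reading of the trace, not a verbatim copy of target/auth.sh.

# host side: restrict DH-HMAC-CHAP negotiation to sha512 + ffdhe2048 (auth.sh@94)
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# target side: allow this host to authenticate with key0/ckey0 (auth.sh@39)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach, requesting bidirectional authentication with the same key pair (auth.sh@40)
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0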
08:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:30:48.389 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:48.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:48.389 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:48.389 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.389 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.648 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.648 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:48.648 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:48.648 08:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:49.218 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:49.478 00:30:49.478 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:49.478 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:49.478 08:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:50.049 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.049 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:50.049 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.049 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.049 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.049 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:50.049 { 00:30:50.049 "cntlid": 107, 00:30:50.049 "qid": 0, 00:30:50.049 "state": "enabled", 00:30:50.049 "thread": "nvmf_tgt_poll_group_000", 00:30:50.049 "listen_address": { 00:30:50.049 "trtype": "TCP", 00:30:50.049 "adrfam": "IPv4", 00:30:50.049 "traddr": "10.0.0.2", 00:30:50.049 "trsvcid": "4420" 00:30:50.049 }, 00:30:50.049 "peer_address": { 00:30:50.049 "trtype": "TCP", 00:30:50.049 "adrfam": "IPv4", 00:30:50.049 "traddr": "10.0.0.1", 00:30:50.049 "trsvcid": "42732" 00:30:50.049 }, 00:30:50.049 "auth": { 00:30:50.049 "state": "completed", 00:30:50.049 "digest": "sha512", 00:30:50.049 "dhgroup": "ffdhe2048" 00:30:50.049 } 00:30:50.049 } 00:30:50.049 ]' 00:30:50.049 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:50.308 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:50.308 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:50.308 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:50.308 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:50.308 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:50.308 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:50.308 08:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:51.249 08:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:52.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:52.629 08:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.199 08:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.768 
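[Editorial annotation] As in the key0 and key1 passes above, the attach for key2 is followed immediately (in the entries below) by the same verification step: the host lists its controllers, and the target's queue pairs are dumped and checked with jq for the negotiated digest, dhgroup, and a completed authentication state. A condensed sketch; the trace stores the qpair dump in a variable and runs jq on it, and the here-strings below are just that condensation:

# the newly attached controller must be the one we created
[[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]   # auth.sh@44

# the admin qpair must report the expected negotiated parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)       # auth.sh@45
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha512"    ]]            # auth.sh@46
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe2048" ]]            # auth.sh@47
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]            # auth.sh@48

# tear the RPC-level controller down before the kernel-initiator login
hostrpc bdev_nvme_detach_controller nvme0                                    # auth.sh@49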
00:30:53.768 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:53.768 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:53.768 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:54.706 { 00:30:54.706 "cntlid": 109, 00:30:54.706 "qid": 0, 00:30:54.706 "state": "enabled", 00:30:54.706 "thread": "nvmf_tgt_poll_group_000", 00:30:54.706 "listen_address": { 00:30:54.706 "trtype": "TCP", 00:30:54.706 "adrfam": "IPv4", 00:30:54.706 "traddr": "10.0.0.2", 00:30:54.706 "trsvcid": "4420" 00:30:54.706 }, 00:30:54.706 "peer_address": { 00:30:54.706 "trtype": "TCP", 00:30:54.706 "adrfam": "IPv4", 00:30:54.706 "traddr": "10.0.0.1", 00:30:54.706 "trsvcid": "42768" 00:30:54.706 }, 00:30:54.706 "auth": { 00:30:54.706 "state": "completed", 00:30:54.706 "digest": "sha512", 00:30:54.706 "dhgroup": "ffdhe2048" 00:30:54.706 } 00:30:54.706 } 00:30:54.706 ]' 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:54.706 08:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:54.706 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:54.706 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:54.706 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:54.706 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:54.707 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:55.276 08:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:56.654 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:56.654 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:57.222 08:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:30:58.162 00:30:58.162 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:30:58.162 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:30:58.162 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:30:58.422 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.423 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:58.423 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.423 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:58.683 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.683 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:30:58.683 { 00:30:58.683 "cntlid": 111, 00:30:58.683 "qid": 0, 00:30:58.683 "state": "enabled", 00:30:58.683 "thread": "nvmf_tgt_poll_group_000", 00:30:58.683 "listen_address": { 00:30:58.683 "trtype": "TCP", 00:30:58.683 "adrfam": "IPv4", 00:30:58.683 "traddr": "10.0.0.2", 00:30:58.683 "trsvcid": "4420" 00:30:58.683 }, 00:30:58.683 "peer_address": { 00:30:58.683 "trtype": "TCP", 00:30:58.683 "adrfam": "IPv4", 00:30:58.683 "traddr": "10.0.0.1", 00:30:58.683 "trsvcid": "42068" 00:30:58.683 }, 00:30:58.683 "auth": { 00:30:58.683 "state": "completed", 00:30:58.683 "digest": "sha512", 00:30:58.683 "dhgroup": "ffdhe2048" 00:30:58.683 } 00:30:58.683 } 00:30:58.683 ]' 00:30:58.683 08:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:30:58.683 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:58.683 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:30:58.683 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:58.683 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:30:58.942 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:58.942 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:58.942 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:59.202 08:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:00.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:00.587 08:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:01.156 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:01.416 00:31:01.416 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:01.416 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:01.416 08:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
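[Editorial annotation] Interleaved with these RPC-level attaches, every pass also performs a kernel-initiator login: the nvme connect calls above pass the DH-HMAC-CHAP secrets literally on the command line (the DHHC-1:NN:... strings printed in the trace are the base64-encoded keys for this run; they are reduced to placeholders below). Once the login succeeds, the controller is disconnected and the host entry removed so the next key/dhgroup combination starts from a clean subsystem. Roughly:

# kernel initiator login with explicit DH-HMAC-CHAP secrets (literal values appear in the trace)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret "DHHC-1:00:<host key>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl key>"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# forget the host on the target so the next iteration re-adds it with the next key pair
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02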
00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:01.984 { 00:31:01.984 "cntlid": 113, 00:31:01.984 "qid": 0, 00:31:01.984 "state": "enabled", 00:31:01.984 "thread": "nvmf_tgt_poll_group_000", 00:31:01.984 "listen_address": { 00:31:01.984 "trtype": "TCP", 00:31:01.984 "adrfam": "IPv4", 00:31:01.984 "traddr": "10.0.0.2", 00:31:01.984 "trsvcid": "4420" 00:31:01.984 }, 00:31:01.984 "peer_address": { 00:31:01.984 "trtype": "TCP", 00:31:01.984 "adrfam": "IPv4", 00:31:01.984 "traddr": "10.0.0.1", 00:31:01.984 "trsvcid": "42102" 00:31:01.984 }, 00:31:01.984 "auth": { 00:31:01.984 "state": "completed", 00:31:01.984 "digest": "sha512", 00:31:01.984 "dhgroup": "ffdhe3072" 00:31:01.984 } 00:31:01.984 } 00:31:01.984 ]' 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:01.984 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:02.243 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:02.243 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:02.243 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:02.243 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:02.243 08:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:02.812 08:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:04.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:04.190 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:04.821 08:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:05.094 00:31:05.094 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:05.094 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:05.094 08:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:05.663 { 00:31:05.663 "cntlid": 115, 00:31:05.663 "qid": 0, 00:31:05.663 "state": "enabled", 00:31:05.663 "thread": "nvmf_tgt_poll_group_000", 00:31:05.663 "listen_address": { 00:31:05.663 "trtype": "TCP", 00:31:05.663 "adrfam": "IPv4", 00:31:05.663 "traddr": "10.0.0.2", 00:31:05.663 "trsvcid": "4420" 00:31:05.663 }, 00:31:05.663 "peer_address": { 00:31:05.663 "trtype": "TCP", 00:31:05.663 "adrfam": "IPv4", 00:31:05.663 "traddr": "10.0.0.1", 00:31:05.663 "trsvcid": "42128" 00:31:05.663 }, 00:31:05.663 "auth": { 00:31:05.663 "state": "completed", 00:31:05.663 "digest": "sha512", 00:31:05.663 "dhgroup": "ffdhe3072" 00:31:05.663 } 00:31:05.663 } 00:31:05.663 ]' 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:05.663 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:05.924 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:05.924 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:05.924 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:05.924 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:05.924 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:06.493 08:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:31:07.870 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:07.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:07.870 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:07.870 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.870 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:07.870 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.870 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:07.871 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:07.871 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:08.130 08:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:09.070 00:31:09.070 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:09.070 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:09.070 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:09.329 { 00:31:09.329 "cntlid": 117, 00:31:09.329 "qid": 0, 00:31:09.329 "state": "enabled", 00:31:09.329 "thread": "nvmf_tgt_poll_group_000", 00:31:09.329 "listen_address": 
{ 00:31:09.329 "trtype": "TCP", 00:31:09.329 "adrfam": "IPv4", 00:31:09.329 "traddr": "10.0.0.2", 00:31:09.329 "trsvcid": "4420" 00:31:09.329 }, 00:31:09.329 "peer_address": { 00:31:09.329 "trtype": "TCP", 00:31:09.329 "adrfam": "IPv4", 00:31:09.329 "traddr": "10.0.0.1", 00:31:09.329 "trsvcid": "37872" 00:31:09.329 }, 00:31:09.329 "auth": { 00:31:09.329 "state": "completed", 00:31:09.329 "digest": "sha512", 00:31:09.329 "dhgroup": "ffdhe3072" 00:31:09.329 } 00:31:09.329 } 00:31:09.329 ]' 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:09.329 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:09.588 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:09.588 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:09.588 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:09.588 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:09.588 08:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:10.157 08:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:11.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:11.536 08:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.795 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.796 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.796 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:11.796 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:12.402 00:31:12.402 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:12.402 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:12.402 08:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:12.971 { 00:31:12.971 "cntlid": 119, 00:31:12.971 "qid": 0, 00:31:12.971 "state": "enabled", 00:31:12.971 "thread": "nvmf_tgt_poll_group_000", 00:31:12.971 "listen_address": { 00:31:12.971 "trtype": "TCP", 00:31:12.971 "adrfam": "IPv4", 00:31:12.971 "traddr": "10.0.0.2", 00:31:12.971 "trsvcid": "4420" 00:31:12.971 }, 00:31:12.971 "peer_address": { 00:31:12.971 "trtype": "TCP", 00:31:12.971 "adrfam": "IPv4", 00:31:12.971 "traddr": "10.0.0.1", 00:31:12.971 "trsvcid": "37892" 00:31:12.971 }, 00:31:12.971 "auth": { 00:31:12.971 "state": "completed", 00:31:12.971 "digest": "sha512", 00:31:12.971 "dhgroup": 
"ffdhe3072" 00:31:12.971 } 00:31:12.971 } 00:31:12.971 ]' 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:12.971 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:13.231 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:13.231 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:13.231 08:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:13.800 08:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:15.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.181 08:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:16.120 00:31:16.120 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:16.120 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:16.120 08:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:16.690 { 00:31:16.690 "cntlid": 121, 00:31:16.690 "qid": 0, 00:31:16.690 "state": "enabled", 00:31:16.690 "thread": "nvmf_tgt_poll_group_000", 00:31:16.690 "listen_address": { 00:31:16.690 "trtype": "TCP", 00:31:16.690 "adrfam": "IPv4", 00:31:16.690 "traddr": "10.0.0.2", 00:31:16.690 "trsvcid": "4420" 00:31:16.690 }, 00:31:16.690 "peer_address": { 00:31:16.690 "trtype": "TCP", 00:31:16.690 "adrfam": "IPv4", 00:31:16.690 "traddr": "10.0.0.1", 00:31:16.690 "trsvcid": "33428" 00:31:16.690 }, 00:31:16.690 "auth": { 00:31:16.690 "state": "completed", 00:31:16.690 "digest": "sha512", 00:31:16.690 "dhgroup": "ffdhe4096" 00:31:16.690 } 00:31:16.690 } 00:31:16.690 ]' 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
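[Editorial annotation] By this point the run has cycled through the null, ffdhe2048, and ffdhe3072 groups and is repeating the same checks for ffdhe4096. The driving structure, approximated from the auth.sh line numbers in the trace: the keys/ckeys arrays and the connect_authenticate helper are defined in target/auth.sh, which this log does not reproduce, and key3 has no matching ckey3 in this run, so the ${ckeys[$3]:+...} expansion at auth.sh@37 drops --dhchap-ctrlr-key for that slot.

# approximate shape of the loops seen in this trace; groups beyond ffdhe4096 may follow
for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do            # auth.sh@92
    for keyid in "${!keys[@]}"; do                               # auth.sh@93
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
            --dhchap-dhgroups "$dhgroup"                         # auth.sh@94
        connect_authenticate sha512 "$dhgroup" "$keyid"          # auth.sh@96
    done
done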
00:31:16.690 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:16.950 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:16.950 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:16.950 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:16.950 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:16.950 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:17.521 08:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:19.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.430 08:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:20.403 00:31:20.403 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:20.403 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:20.403 08:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:20.984 { 00:31:20.984 "cntlid": 123, 00:31:20.984 "qid": 0, 00:31:20.984 "state": "enabled", 00:31:20.984 "thread": "nvmf_tgt_poll_group_000", 00:31:20.984 "listen_address": { 00:31:20.984 "trtype": "TCP", 00:31:20.984 "adrfam": "IPv4", 00:31:20.984 "traddr": "10.0.0.2", 00:31:20.984 "trsvcid": "4420" 00:31:20.984 }, 00:31:20.984 "peer_address": { 00:31:20.984 "trtype": "TCP", 00:31:20.984 "adrfam": "IPv4", 00:31:20.984 "traddr": "10.0.0.1", 00:31:20.984 "trsvcid": "33462" 00:31:20.984 }, 00:31:20.984 "auth": { 00:31:20.984 "state": "completed", 00:31:20.984 "digest": "sha512", 00:31:20.984 "dhgroup": "ffdhe4096" 00:31:20.984 } 00:31:20.984 } 00:31:20.984 ]' 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:20.984 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:20.984 08:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:21.244 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:21.244 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:21.244 08:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:21.814 08:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:23.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:23.196 08:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:24.136 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:24.396 00:31:24.396 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:24.396 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:24.396 08:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:24.967 { 00:31:24.967 "cntlid": 125, 00:31:24.967 "qid": 0, 00:31:24.967 "state": "enabled", 00:31:24.967 "thread": "nvmf_tgt_poll_group_000", 00:31:24.967 "listen_address": { 00:31:24.967 "trtype": "TCP", 00:31:24.967 "adrfam": "IPv4", 00:31:24.967 "traddr": "10.0.0.2", 00:31:24.967 "trsvcid": "4420" 00:31:24.967 }, 00:31:24.967 "peer_address": { 00:31:24.967 "trtype": "TCP", 00:31:24.967 "adrfam": "IPv4", 00:31:24.967 "traddr": "10.0.0.1", 00:31:24.967 "trsvcid": "33474" 00:31:24.967 }, 00:31:24.967 "auth": { 00:31:24.967 "state": "completed", 00:31:24.967 "digest": "sha512", 00:31:24.967 "dhgroup": "ffdhe4096" 00:31:24.967 } 00:31:24.967 } 00:31:24.967 ]' 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:24.967 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:25.227 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:25.227 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:25.227 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:25.227 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:31:25.227 08:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:25.797 08:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:27.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:27.176 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:27.436 08:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:28.376 00:31:28.376 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:28.376 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:28.376 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:28.634 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.634 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:28.634 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.634 08:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.634 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.634 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:28.634 { 00:31:28.634 "cntlid": 127, 00:31:28.634 "qid": 0, 00:31:28.634 "state": "enabled", 00:31:28.634 "thread": "nvmf_tgt_poll_group_000", 00:31:28.634 "listen_address": { 00:31:28.634 "trtype": "TCP", 00:31:28.634 "adrfam": "IPv4", 00:31:28.634 "traddr": "10.0.0.2", 00:31:28.634 "trsvcid": "4420" 00:31:28.634 }, 00:31:28.634 "peer_address": { 00:31:28.634 "trtype": "TCP", 00:31:28.634 "adrfam": "IPv4", 00:31:28.634 "traddr": "10.0.0.1", 00:31:28.634 "trsvcid": "48522" 00:31:28.634 }, 00:31:28.634 "auth": { 00:31:28.634 "state": "completed", 00:31:28.634 "digest": "sha512", 00:31:28.634 "dhgroup": "ffdhe4096" 00:31:28.634 } 00:31:28.634 } 00:31:28.634 ]' 00:31:28.634 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:28.635 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:28.635 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:28.893 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:28.893 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:28.893 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:28.893 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:28.893 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:29.152 08:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:31:30.528 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:30.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:30.528 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:30.528 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.528 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.529 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.529 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:30.529 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:30.529 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:30.529 08:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.786 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:30.787 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:31.725 00:31:31.725 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:31.725 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:31.725 08:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:31.984 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.984 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:31.984 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.984 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:32.242 { 00:31:32.242 "cntlid": 129, 00:31:32.242 "qid": 0, 00:31:32.242 "state": "enabled", 00:31:32.242 "thread": "nvmf_tgt_poll_group_000", 00:31:32.242 "listen_address": { 00:31:32.242 "trtype": "TCP", 00:31:32.242 "adrfam": "IPv4", 00:31:32.242 "traddr": "10.0.0.2", 00:31:32.242 "trsvcid": "4420" 00:31:32.242 }, 00:31:32.242 "peer_address": { 00:31:32.242 "trtype": "TCP", 00:31:32.242 "adrfam": "IPv4", 00:31:32.242 "traddr": "10.0.0.1", 00:31:32.242 "trsvcid": "48532" 00:31:32.242 }, 00:31:32.242 "auth": { 00:31:32.242 "state": "completed", 00:31:32.242 "digest": "sha512", 00:31:32.242 "dhgroup": "ffdhe6144" 00:31:32.242 } 00:31:32.242 } 00:31:32.242 ]' 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:32.242 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:32.500 08:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:33.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:33.880 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.446 08:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:35.020 00:31:35.020 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:35.020 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:35.020 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:35.606 { 00:31:35.606 "cntlid": 131, 00:31:35.606 "qid": 0, 00:31:35.606 "state": "enabled", 00:31:35.606 "thread": "nvmf_tgt_poll_group_000", 00:31:35.606 "listen_address": { 00:31:35.606 "trtype": "TCP", 00:31:35.606 "adrfam": "IPv4", 00:31:35.606 "traddr": "10.0.0.2", 00:31:35.606 "trsvcid": "4420" 00:31:35.606 }, 00:31:35.606 "peer_address": { 00:31:35.606 "trtype": "TCP", 00:31:35.606 "adrfam": "IPv4", 00:31:35.606 "traddr": "10.0.0.1", 00:31:35.606 "trsvcid": "48572" 00:31:35.606 }, 00:31:35.606 "auth": { 00:31:35.606 "state": "completed", 00:31:35.606 "digest": "sha512", 00:31:35.606 "dhgroup": "ffdhe6144" 00:31:35.606 } 00:31:35.606 } 00:31:35.606 ]' 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:35.606 08:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:35.606 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:35.606 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:35.606 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:35.606 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:35.606 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:36.174 08:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:31:37.550 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:37.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:37.550 08:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:37.550 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.550 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:37.550 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.550 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:37.550 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:37.550 08:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.117 08:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.682 00:31:38.682 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:38.682 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:38.682 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:31:38.940 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.940 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:38.940 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.940 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:39.199 { 00:31:39.199 "cntlid": 133, 00:31:39.199 "qid": 0, 00:31:39.199 "state": "enabled", 00:31:39.199 "thread": "nvmf_tgt_poll_group_000", 00:31:39.199 "listen_address": { 00:31:39.199 "trtype": "TCP", 00:31:39.199 "adrfam": "IPv4", 00:31:39.199 "traddr": "10.0.0.2", 00:31:39.199 "trsvcid": "4420" 00:31:39.199 }, 00:31:39.199 "peer_address": { 00:31:39.199 "trtype": "TCP", 00:31:39.199 "adrfam": "IPv4", 00:31:39.199 "traddr": "10.0.0.1", 00:31:39.199 "trsvcid": "59334" 00:31:39.199 }, 00:31:39.199 "auth": { 00:31:39.199 "state": "completed", 00:31:39.199 "digest": "sha512", 00:31:39.199 "dhgroup": "ffdhe6144" 00:31:39.199 } 00:31:39.199 } 00:31:39.199 ]' 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:39.199 08:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:39.768 08:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:31:41.674 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:41.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:41.674 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:41.674 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.674 08:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.674 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.674 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:41.674 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:41.674 08:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:41.934 08:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:42.872 00:31:43.131 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:43.131 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:43.131 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:43.391 { 00:31:43.391 "cntlid": 135, 00:31:43.391 "qid": 0, 00:31:43.391 "state": "enabled", 00:31:43.391 "thread": "nvmf_tgt_poll_group_000", 00:31:43.391 "listen_address": { 00:31:43.391 "trtype": "TCP", 00:31:43.391 "adrfam": "IPv4", 00:31:43.391 "traddr": "10.0.0.2", 00:31:43.391 "trsvcid": "4420" 00:31:43.391 }, 00:31:43.391 "peer_address": { 00:31:43.391 "trtype": "TCP", 00:31:43.391 "adrfam": "IPv4", 00:31:43.391 "traddr": "10.0.0.1", 00:31:43.391 "trsvcid": "59352" 00:31:43.391 }, 00:31:43.391 "auth": { 00:31:43.391 "state": "completed", 00:31:43.391 "digest": "sha512", 00:31:43.391 "dhgroup": "ffdhe6144" 00:31:43.391 } 00:31:43.391 } 00:31:43.391 ]' 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:43.391 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:43.652 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:43.652 08:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:43.652 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:43.652 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:43.652 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:44.222 08:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:45.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:45.604 08:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:45.604 08:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:46.174 08:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.556 00:31:47.556 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:47.556 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:47.556 08:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.125 08:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:48.125 { 00:31:48.125 "cntlid": 137, 00:31:48.125 "qid": 0, 00:31:48.125 "state": "enabled", 00:31:48.125 "thread": "nvmf_tgt_poll_group_000", 00:31:48.125 "listen_address": { 00:31:48.125 "trtype": "TCP", 00:31:48.125 "adrfam": "IPv4", 00:31:48.125 "traddr": "10.0.0.2", 00:31:48.125 "trsvcid": "4420" 00:31:48.125 }, 00:31:48.125 "peer_address": { 00:31:48.125 "trtype": "TCP", 00:31:48.125 "adrfam": "IPv4", 00:31:48.125 "traddr": "10.0.0.1", 00:31:48.125 "trsvcid": "57118" 00:31:48.125 }, 00:31:48.125 "auth": { 00:31:48.125 "state": "completed", 00:31:48.125 "digest": "sha512", 00:31:48.125 "dhgroup": "ffdhe8192" 00:31:48.125 } 00:31:48.125 } 00:31:48.125 ]' 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:48.125 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:48.385 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:48.385 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:48.385 08:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:48.645 08:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:31:50.024 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:50.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:50.024 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:50.024 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.024 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:50.024 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.024 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:50.025 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:50.025 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.607 08:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.989 00:31:51.989 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:51.989 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:51.989 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:52.559 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.559 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:52.559 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.559 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:52.559 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.559 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:52.559 { 00:31:52.559 "cntlid": 139, 00:31:52.559 "qid": 0, 00:31:52.559 "state": "enabled", 00:31:52.559 "thread": "nvmf_tgt_poll_group_000", 00:31:52.559 "listen_address": 
{ 00:31:52.559 "trtype": "TCP", 00:31:52.559 "adrfam": "IPv4", 00:31:52.559 "traddr": "10.0.0.2", 00:31:52.559 "trsvcid": "4420" 00:31:52.559 }, 00:31:52.559 "peer_address": { 00:31:52.559 "trtype": "TCP", 00:31:52.559 "adrfam": "IPv4", 00:31:52.559 "traddr": "10.0.0.1", 00:31:52.559 "trsvcid": "57154" 00:31:52.559 }, 00:31:52.559 "auth": { 00:31:52.559 "state": "completed", 00:31:52.559 "digest": "sha512", 00:31:52.559 "dhgroup": "ffdhe8192" 00:31:52.559 } 00:31:52.559 } 00:31:52.559 ]' 00:31:52.559 08:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:52.559 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:52.559 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:52.819 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:52.819 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:52.819 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:52.819 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:52.819 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:53.388 08:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YjcyNWRmOTEzNTE2ZjExYTExNDM1M2Y2ZmZhNjIyNzDEKFw9: --dhchap-ctrl-secret DHHC-1:02:ODFjMGYzNmI0OWFmMmQ5YmUyZDVjOGEzZWQ3YmNjM2IyMzFmYWY1YzY0MGQzNGFm3K1svA==: 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:54.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:54.769 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.337 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.596 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:55.596 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.596 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.596 08:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:56.976 00:31:56.976 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:31:56.976 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:31:56.976 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:57.236 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.236 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:57.236 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.236 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.236 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.236 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:31:57.236 { 00:31:57.236 "cntlid": 141, 00:31:57.236 "qid": 0, 00:31:57.236 "state": "enabled", 00:31:57.236 "thread": "nvmf_tgt_poll_group_000", 00:31:57.236 "listen_address": { 00:31:57.236 "trtype": "TCP", 00:31:57.236 "adrfam": "IPv4", 00:31:57.236 "traddr": "10.0.0.2", 00:31:57.236 "trsvcid": "4420" 00:31:57.236 }, 00:31:57.236 "peer_address": { 00:31:57.236 "trtype": "TCP", 00:31:57.236 "adrfam": "IPv4", 00:31:57.236 "traddr": "10.0.0.1", 00:31:57.236 "trsvcid": "50490" 00:31:57.236 }, 00:31:57.236 "auth": { 00:31:57.236 
"state": "completed", 00:31:57.236 "digest": "sha512", 00:31:57.236 "dhgroup": "ffdhe8192" 00:31:57.236 } 00:31:57.236 } 00:31:57.236 ]' 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:57.495 08:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:58.064 08:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YjkxYjdhMTM0NzY0YzFhN2FiM2Q0YWNkNDdiMDM4NThkZjQ4YzU3NjEzNWE5ZmE26Qe71g==: --dhchap-ctrl-secret DHHC-1:01:ODNlYmU1ZTE1MzM0NjMzNGE0NmE1Yzk3ZTgwZTNhZTnA2NaO: 00:31:59.004 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:59.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:59.264 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:59.264 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.264 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.264 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.264 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:31:59.264 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:59.264 08:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:31:59.834 08:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:01.214 00:32:01.214 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:01.214 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:01.214 08:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:01.781 { 00:32:01.781 "cntlid": 143, 00:32:01.781 "qid": 0, 00:32:01.781 "state": "enabled", 00:32:01.781 "thread": "nvmf_tgt_poll_group_000", 00:32:01.781 "listen_address": { 00:32:01.781 "trtype": "TCP", 00:32:01.781 "adrfam": "IPv4", 00:32:01.781 "traddr": "10.0.0.2", 00:32:01.781 "trsvcid": "4420" 00:32:01.781 }, 00:32:01.781 "peer_address": { 00:32:01.781 "trtype": "TCP", 00:32:01.781 "adrfam": "IPv4", 00:32:01.781 "traddr": "10.0.0.1", 00:32:01.781 "trsvcid": "50520" 00:32:01.781 }, 00:32:01.781 "auth": { 00:32:01.781 "state": "completed", 00:32:01.781 "digest": "sha512", 00:32:01.781 "dhgroup": "ffdhe8192" 00:32:01.781 } 00:32:01.781 } 00:32:01.781 ]' 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:32:01.781 08:45:14 
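For reference, the qpair check that the script keeps repeating above can be run by hand against the target. A minimal sketch, assuming the target answers on its default RPC socket and jq is available; the rpc.py path and subsystem NQN are the ones used in this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
# the auth parameters negotiated on the first qpair should match what the host requested
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]] || echo "unexpected digest"  >&2
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]] || echo "unexpected dhgroup" >&2
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || echo "auth not completed" >&2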
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:01.781 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:02.040 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:02.040 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:02.040 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:02.610 08:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:32:03.990 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:03.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:03.990 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:03.990 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.990 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:04.249 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.249 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:32:04.249 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:32:04.249 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:32:04.249 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:04.249 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:04.249 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:04.507 08:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.507 08:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.443 00:32:05.443 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:05.443 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:05.443 08:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:05.710 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.710 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:05.710 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.710 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:05.983 { 00:32:05.983 "cntlid": 145, 00:32:05.983 "qid": 0, 00:32:05.983 "state": "enabled", 00:32:05.983 "thread": "nvmf_tgt_poll_group_000", 00:32:05.983 "listen_address": { 00:32:05.983 "trtype": "TCP", 00:32:05.983 "adrfam": "IPv4", 00:32:05.983 "traddr": "10.0.0.2", 00:32:05.983 "trsvcid": "4420" 00:32:05.983 }, 00:32:05.983 "peer_address": { 00:32:05.983 "trtype": "TCP", 00:32:05.983 "adrfam": "IPv4", 00:32:05.983 "traddr": "10.0.0.1", 00:32:05.983 "trsvcid": "50552" 00:32:05.983 }, 00:32:05.983 "auth": { 00:32:05.983 "state": "completed", 00:32:05.983 "digest": "sha512", 00:32:05.983 "dhgroup": "ffdhe8192" 00:32:05.983 } 00:32:05.983 } 00:32:05.983 ]' 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:05.983 08:45:18 
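The connect_authenticate rounds logged above reduce to a short sequence of RPCs. A hedged sketch of one round, assuming an SPDK checkout, a target on the default RPC socket, an initiator app listening on /var/tmp/host.sock, and DH-HMAC-CHAP keys already registered under the names key0/ckey0 (the script sets these up earlier, outside this excerpt):

RPC=./scripts/rpc.py          # path is an assumption; the log uses the full Jenkins workspace path
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
# restrict the initiator to a single digest/dhgroup pair for this round
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# allow the host NQN on the subsystem with a host key and a controller (bidirectional) key
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
# attach with authentication, confirm the controller exists, then tear it down
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0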
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:05.983 08:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:06.560 08:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI5NWQ2ZjQ5ZjE2YjVhZDU0ZmIxNDA1NzgwMzgxODFlYTg5NzJlODRhZDdiZWEyPZstFQ==: --dhchap-ctrl-secret DHHC-1:03:NTU5ZmVlMDRkZDNlMDcyMmM3OGQ0YmI2NzI2YmY1ZGI3ODkxNDZjNmYzYjQ2NzEwMmI0ZTFlOTY3NTllY2U1YaKVc38=: 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:08.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:08.471 
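The same handshake is also exercised from the kernel initiator with nvme-cli, as in the connect/disconnect pairs above. A sketch with the DHHC-1-formatted secrets left as placeholders; substitute the full secret strings printed in this log:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
HOST_SECRET='DHHC-1:00:<host secret as printed above>'        # placeholder, not a real key
CTRL_SECRET='DHHC-1:03:<controller secret as printed above>'  # placeholder, not a real key
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0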
08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:08.471 08:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:32:09.411 request: 00:32:09.411 { 00:32:09.411 "name": "nvme0", 00:32:09.411 "trtype": "tcp", 00:32:09.411 "traddr": "10.0.0.2", 00:32:09.411 "adrfam": "ipv4", 00:32:09.411 "trsvcid": "4420", 00:32:09.411 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:32:09.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:32:09.411 "prchk_reftag": false, 00:32:09.411 "prchk_guard": false, 00:32:09.411 "hdgst": false, 00:32:09.411 "ddgst": false, 00:32:09.411 "dhchap_key": "key2", 00:32:09.411 "method": "bdev_nvme_attach_controller", 00:32:09.411 "req_id": 1 00:32:09.411 } 00:32:09.411 Got JSON-RPC error response 00:32:09.411 response: 00:32:09.411 { 00:32:09.411 "code": -5, 00:32:09.411 "message": "Input/output error" 00:32:09.411 } 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:09.411 08:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:10.793 request: 00:32:10.793 { 00:32:10.793 "name": "nvme0", 00:32:10.793 "trtype": "tcp", 00:32:10.793 "traddr": "10.0.0.2", 00:32:10.793 "adrfam": "ipv4", 00:32:10.793 "trsvcid": "4420", 00:32:10.793 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:32:10.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:32:10.793 "prchk_reftag": false, 00:32:10.793 "prchk_guard": false, 00:32:10.793 "hdgst": false, 00:32:10.793 "ddgst": false, 00:32:10.793 "dhchap_key": "key1", 00:32:10.793 "dhchap_ctrlr_key": "ckey2", 00:32:10.793 "method": "bdev_nvme_attach_controller", 00:32:10.793 "req_id": 1 00:32:10.793 } 00:32:10.793 Got JSON-RPC error response 00:32:10.793 response: 00:32:10.793 { 00:32:10.793 "code": -5, 00:32:10.793 "message": "Input/output error" 00:32:10.793 } 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
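The failures above are the expected outcome: when the attach is attempted with a key the subsystem's host entry does not hold, bdev_nvme_attach_controller returns JSON-RPC error -5 (Input/output error), and the NOT wrapper treats that failure as a pass. A minimal way to assert the same thing directly, reusing the RPC, HOSTSOCK, SUBNQN and HOSTNQN variables from the sketch further up:

if $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key2; then
    echo "ERROR: authentication unexpectedly succeeded" >&2
    exit 1
fi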
00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.793 08:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.174 request: 00:32:12.174 { 00:32:12.174 "name": "nvme0", 00:32:12.174 "trtype": "tcp", 00:32:12.174 "traddr": "10.0.0.2", 00:32:12.174 "adrfam": "ipv4", 00:32:12.174 "trsvcid": "4420", 00:32:12.174 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:32:12.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:32:12.174 "prchk_reftag": false, 00:32:12.174 "prchk_guard": false, 00:32:12.174 "hdgst": false, 00:32:12.174 "ddgst": false, 00:32:12.174 "dhchap_key": "key1", 00:32:12.174 "dhchap_ctrlr_key": "ckey1", 00:32:12.174 "method": "bdev_nvme_attach_controller", 
00:32:12.174 "req_id": 1 00:32:12.174 } 00:32:12.174 Got JSON-RPC error response 00:32:12.174 response: 00:32:12.174 { 00:32:12.174 "code": -5, 00:32:12.174 "message": "Input/output error" 00:32:12.174 } 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2360217 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2360217 ']' 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2360217 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:32:12.174 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:12.175 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360217 00:32:12.175 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:12.175 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:12.175 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360217' 00:32:12.175 killing process with pid 2360217 00:32:12.175 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2360217 00:32:12.175 08:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2360217 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2393844 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@482 -- # waitforlisten 2393844 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2393844 ']' 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:14.083 08:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.550 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:15.550 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:32:15.550 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:15.550 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:15.550 08:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2393844 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2393844 ']' 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
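At this point the target has been relaunched for the second half of the test with authentication debug logging enabled. A rough stand-alone equivalent of that restart; the netns wrapper used in this run is omitted, and spdk_get_version/framework_start_init are assumptions about how to drive a --wait-for-rpc startup by hand, not commands taken from this log:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# poll the default RPC socket until the app answers, then finish initialization
until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 1; done
./scripts/rpc.py framework_start_init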
00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:15.550 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.491 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.491 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:32:16.491 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:32:16.491 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.491 08:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:16.751 08:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:18.131 00:32:18.131 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:18.131 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:18.131 08:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:18.699 { 00:32:18.699 "cntlid": 1, 00:32:18.699 "qid": 0, 00:32:18.699 "state": "enabled", 00:32:18.699 "thread": "nvmf_tgt_poll_group_000", 00:32:18.699 "listen_address": { 00:32:18.699 "trtype": "TCP", 00:32:18.699 "adrfam": "IPv4", 00:32:18.699 "traddr": "10.0.0.2", 00:32:18.699 "trsvcid": "4420" 00:32:18.699 }, 00:32:18.699 "peer_address": { 00:32:18.699 "trtype": "TCP", 00:32:18.699 "adrfam": "IPv4", 00:32:18.699 "traddr": "10.0.0.1", 00:32:18.699 "trsvcid": "34074" 00:32:18.699 }, 00:32:18.699 "auth": { 00:32:18.699 "state": "completed", 00:32:18.699 "digest": "sha512", 00:32:18.699 "dhgroup": "ffdhe8192" 00:32:18.699 } 00:32:18.699 } 00:32:18.699 ]' 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:18.699 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:18.959 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:18.959 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:18.959 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:19.529 08:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:NDJiOGJiMWRiZjdjMDNlZDdhZjJhOTdlMGJkYmQ3NWY4OWVhMTE0NzBhZjc3MWQ4MmMzZmM2NzVmMWMyZDRmNadceP0=: 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:20.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:32:20.909 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:32:21.478 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:21.479 08:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:22.434 request: 00:32:22.434 { 00:32:22.434 "name": "nvme0", 00:32:22.434 "trtype": "tcp", 00:32:22.434 "traddr": "10.0.0.2", 00:32:22.434 "adrfam": "ipv4", 00:32:22.434 "trsvcid": "4420", 00:32:22.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:32:22.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:32:22.434 "prchk_reftag": false, 00:32:22.434 "prchk_guard": false, 00:32:22.434 "hdgst": false, 00:32:22.434 "ddgst": false, 00:32:22.434 "dhchap_key": "key3", 00:32:22.434 "method": "bdev_nvme_attach_controller", 00:32:22.434 "req_id": 1 00:32:22.434 } 00:32:22.434 Got JSON-RPC error response 00:32:22.434 response: 00:32:22.434 { 00:32:22.434 "code": -5, 00:32:22.434 "message": "Input/output error" 00:32:22.434 } 00:32:22.434 08:45:34 
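This failure is the parameter-mismatch variant of the negative tests: the initiator was just restricted to a digest that cannot be negotiated for this key and subsystem, so the attach with key3 is expected to fail with the same -5 error. A sketch of the restriction and of widening the lists again, which the script does in the steps that follow; same variable assumptions as the earlier sketches:

$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256
if $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key3; then
    echo "ERROR: attach should have failed on the digest mismatch" >&2
fi
# re-enable every digest and FFDHE group before the next case
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192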
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:32:22.434 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:22.434 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:22.434 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:22.434 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:32:22.434 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:32:22.434 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:32:22.434 08:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:32:22.713 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:22.713 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:32:22.713 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:22.713 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:32:22.713 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.713 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:32:22.713 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:22.714 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:22.714 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:23.653 request: 00:32:23.653 { 00:32:23.653 "name": "nvme0", 00:32:23.653 "trtype": "tcp", 00:32:23.653 "traddr": "10.0.0.2", 00:32:23.653 "adrfam": "ipv4", 00:32:23.653 "trsvcid": "4420", 00:32:23.653 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:32:23.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:32:23.653 "prchk_reftag": false, 00:32:23.653 "prchk_guard": false, 00:32:23.653 "hdgst": false, 00:32:23.653 "ddgst": false, 00:32:23.653 "dhchap_key": "key3", 00:32:23.653 
"method": "bdev_nvme_attach_controller", 00:32:23.653 "req_id": 1 00:32:23.653 } 00:32:23.653 Got JSON-RPC error response 00:32:23.653 response: 00:32:23.653 { 00:32:23.653 "code": -5, 00:32:23.653 "message": "Input/output error" 00:32:23.653 } 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:23.653 08:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:32:24.223 08:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:32:24.794 request: 00:32:24.794 { 00:32:24.794 "name": "nvme0", 00:32:24.794 "trtype": "tcp", 00:32:24.794 "traddr": "10.0.0.2", 00:32:24.794 "adrfam": "ipv4", 00:32:24.794 "trsvcid": "4420", 00:32:24.794 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:32:24.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:32:24.794 "prchk_reftag": false, 00:32:24.794 "prchk_guard": false, 00:32:24.794 "hdgst": false, 00:32:24.794 "ddgst": false, 00:32:24.794 "dhchap_key": "key0", 00:32:24.794 "dhchap_ctrlr_key": "key1", 00:32:24.794 "method": "bdev_nvme_attach_controller", 00:32:24.794 "req_id": 1 00:32:24.794 } 00:32:24.794 Got JSON-RPC error response 00:32:24.794 response: 00:32:24.794 { 00:32:24.794 "code": -5, 00:32:24.794 "message": "Input/output error" 00:32:24.794 } 00:32:24.794 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:32:24.794 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:24.794 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:24.794 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:24.794 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:24.794 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:32:25.365 00:32:25.365 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:32:25.365 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 
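The final positive case attaches with only the host key (no --dhchap-ctrlr-key, so no mutual authentication is requested) and, as the next entries show, confirms the controller before tearing everything down. Condensed, under the same assumptions as the earlier sketches:

$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0
[[ $($RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] && echo "controller authenticated"
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0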
00:32:25.365 08:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:25.935 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.935 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:25.935 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:26.505 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:32:26.505 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:32:26.505 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2360371 00:32:26.505 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2360371 ']' 00:32:26.505 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2360371 00:32:26.505 08:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:32:26.505 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:26.505 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360371 00:32:26.764 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:26.765 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:26.765 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360371' 00:32:26.765 killing process with pid 2360371 00:32:26.765 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2360371 00:32:26.765 08:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2360371 00:32:30.059 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:32:30.059 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:30.059 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:32:30.059 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:30.059 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:32:30.059 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:30.059 08:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:30.059 rmmod nvme_tcp 00:32:30.059 rmmod nvme_fabrics 00:32:30.059 rmmod nvme_keyring 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2393844 ']' 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2393844 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2393844 ']' 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2393844 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2393844 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2393844' 00:32:30.059 killing process with pid 2393844 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2393844 00:32:30.059 08:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2393844 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.599 08:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.v0v /tmp/spdk.key-sha256.8TQ /tmp/spdk.key-sha384.Ovk /tmp/spdk.key-sha512.1E8 /tmp/spdk.key-sha512.17d /tmp/spdk.key-sha384.lyE /tmp/spdk.key-sha256.aDt '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:32:34.506 00:32:34.506 real 5m11.903s 00:32:34.506 user 12m22.580s 00:32:34.506 sys 0m39.503s 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.506 ************************************ 00:32:34.506 END TEST nvmf_auth_target 00:32:34.506 ************************************ 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:34.506 ************************************ 00:32:34.506 START TEST nvmf_bdevio_no_huge 00:32:34.506 ************************************ 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:32:34.506 * Looking for test storage... 00:32:34.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:34.506 08:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:34.506 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:32:34.507 08:45:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:32:37.809 08:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:37.809 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:37.809 08:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:37.809 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.809 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:37.810 Found net devices under 0000:84:00.0: cvl_0_0 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:37.810 
08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:37.810 Found net devices under 0000:84:00.1: cvl_0_1 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.810 
08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:37.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:32:37.810 00:32:37.810 --- 10.0.0.2 ping statistics --- 00:32:37.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.810 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:37.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:32:37.810 00:32:37.810 --- 10.0.0.1 ping statistics --- 00:32:37.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.810 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:37.810 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2397938 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2397938 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2397938 ']' 00:32:38.071 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.072 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:32:38.072 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:38.072 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:38.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.072 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:38.072 08:45:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:38.072 [2024-07-23 08:45:50.443599] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:38.072 [2024-07-23 08:45:50.443782] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:32:38.332 [2024-07-23 08:45:50.685372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:38.904 [2024-07-23 08:45:51.253841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.904 [2024-07-23 08:45:51.253943] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.904 [2024-07-23 08:45:51.253989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.904 [2024-07-23 08:45:51.254025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.904 [2024-07-23 08:45:51.254060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.904 [2024-07-23 08:45:51.254305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:32:38.904 [2024-07-23 08:45:51.254421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:32:38.904 [2024-07-23 08:45:51.254497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.904 [2024-07-23 08:45:51.254511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:39.844 [2024-07-23 08:45:52.082277] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:39.844 Malloc0 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:39.844 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:39.845 [2024-07-23 08:45:52.272007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:39.845 { 00:32:39.845 "params": { 00:32:39.845 "name": "Nvme$subsystem", 00:32:39.845 "trtype": "$TEST_TRANSPORT", 00:32:39.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.845 "adrfam": "ipv4", 00:32:39.845 "trsvcid": "$NVMF_PORT", 00:32:39.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.845 "hdgst": ${hdgst:-false}, 00:32:39.845 "ddgst": ${ddgst:-false} 00:32:39.845 }, 00:32:39.845 "method": "bdev_nvme_attach_controller" 00:32:39.845 } 00:32:39.845 EOF 00:32:39.845 )") 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@556 -- # jq . 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:32:39.845 08:45:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:39.845 "params": { 00:32:39.845 "name": "Nvme1", 00:32:39.845 "trtype": "tcp", 00:32:39.845 "traddr": "10.0.0.2", 00:32:39.845 "adrfam": "ipv4", 00:32:39.845 "trsvcid": "4420", 00:32:39.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.845 "hdgst": false, 00:32:39.845 "ddgst": false 00:32:39.845 }, 00:32:39.845 "method": "bdev_nvme_attach_controller" 00:32:39.845 }' 00:32:40.106 [2024-07-23 08:45:52.449491] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:40.106 [2024-07-23 08:45:52.449818] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2398210 ] 00:32:40.375 [2024-07-23 08:45:52.804668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:40.987 [2024-07-23 08:45:53.310008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.987 [2024-07-23 08:45:53.310067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.987 [2024-07-23 08:45:53.310076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:41.557 I/O targets: 00:32:41.557 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:41.557 00:32:41.557 00:32:41.557 CUnit - A unit testing framework for C - Version 2.1-3 00:32:41.557 http://cunit.sourceforge.net/ 00:32:41.557 00:32:41.557 00:32:41.557 Suite: bdevio tests on: Nvme1n1 00:32:41.557 Test: blockdev write read block ...passed 00:32:41.557 Test: blockdev write zeroes read block ...passed 00:32:41.557 Test: blockdev write zeroes read no split ...passed 00:32:41.557 Test: blockdev write zeroes read split ...passed 00:32:41.816 Test: blockdev write zeroes read split partial ...passed 00:32:41.816 Test: blockdev reset ...[2024-07-23 08:45:54.083355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:41.816 [2024-07-23 08:45:54.083624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:32:41.816 [2024-07-23 08:45:54.115760] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:41.816 passed 00:32:41.816 Test: blockdev write read 8 blocks ...passed 00:32:41.816 Test: blockdev write read size > 128k ...passed 00:32:41.816 Test: blockdev write read invalid size ...passed 00:32:41.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:41.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:41.816 Test: blockdev write read max offset ...passed 00:32:41.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:41.816 Test: blockdev writev readv 8 blocks ...passed 00:32:41.816 Test: blockdev writev readv 30 x 1block ...passed 00:32:41.816 Test: blockdev writev readv block ...passed 00:32:42.076 Test: blockdev writev readv size > 128k ...passed 00:32:42.076 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:42.076 Test: blockdev comparev and writev ...[2024-07-23 08:45:54.386393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.386482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.386536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.386579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.387663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.387747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.387829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.387890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.388910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.388992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.389074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.389137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.390139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.390220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.390302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:42.076 [2024-07-23 08:45:54.390395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:42.076 passed 00:32:42.076 Test: blockdev nvme passthru rw ...passed 00:32:42.076 Test: blockdev nvme passthru vendor specific ...[2024-07-23 08:45:54.474062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:42.076 [2024-07-23 08:45:54.474172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.474694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:42.076 [2024-07-23 08:45:54.474775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.475370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:42.076 [2024-07-23 08:45:54.475429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:42.076 [2024-07-23 08:45:54.475933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:42.076 [2024-07-23 08:45:54.476009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:42.076 passed 00:32:42.076 Test: blockdev nvme admin passthru ...passed 00:32:42.076 Test: blockdev copy ...passed 00:32:42.076 00:32:42.076 Run Summary: Type Total Ran Passed Failed Inactive 00:32:42.076 suites 1 1 n/a 0 0 00:32:42.076 tests 23 23 23 0 0 00:32:42.076 asserts 152 152 152 0 n/a 00:32:42.076 00:32:42.076 Elapsed time = 1.380 seconds 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:43.456 rmmod nvme_tcp 00:32:43.456 rmmod nvme_fabrics 00:32:43.456 rmmod nvme_keyring 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2397938 ']' 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2397938 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2397938 ']' 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2397938 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2397938 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2397938' 00:32:43.456 killing process with pid 2397938 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2397938 00:32:43.456 08:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2397938 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.368 08:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:47.912 00:32:47.912 real 0m13.086s 00:32:47.912 user 0m30.542s 00:32:47.912 sys 0m4.944s 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:32:47.912 ************************************ 00:32:47.912 END TEST nvmf_bdevio_no_huge 00:32:47.912 ************************************ 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:47.912 ************************************ 00:32:47.912 START TEST nvmf_tls 00:32:47.912 ************************************ 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:32:47.912 * Looking for test storage... 00:32:47.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.912 08:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.912 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:32:47.913 08:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:51.215 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:51.215 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:51.216 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:51.216 Found net devices under 0000:84:00.0: cvl_0_0 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:51.216 Found net devices under 0000:84:00.1: cvl_0_1 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:51.216 
08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:51.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:32:51.216 00:32:51.216 --- 10.0.0.2 ping statistics --- 00:32:51.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.216 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:32:51.216 00:32:51.216 --- 10.0.0.1 ping statistics --- 00:32:51.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.216 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2400821 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2400821 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2400821 ']' 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:51.216 08:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:51.216 [2024-07-23 08:46:03.525335] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
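At this point nvmf_tgt has just been launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so the socket layer can be switched to the ssl implementation before any transport exists. The configuration that follows is spread over many xtrace lines; condensed, it is roughly the sequence below (every path and argument is copied from the trace itself, so treat this as a summary sketch rather than the authoritative tls.sh logic):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # already running above, repeated here for completeness
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

    $rpc sock_set_default_impl -i ssl                  # TLS-capable sock implementation
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init                          # completes the init deferred by --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FCrleno4db

The -k flag on the listener appears to be what requests the TLS-secured listen path; the target acknowledges it further down with the "TLS support is considered experimental" notice. The trace also flips --tls-version to 7 and toggles --enable-ktls/--disable-ktls, apparently just to verify that sock_impl_get_options reports each value back.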
00:32:51.216 [2024-07-23 08:46:03.525507] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.216 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.216 [2024-07-23 08:46:03.683273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.477 [2024-07-23 08:46:03.997904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.477 [2024-07-23 08:46:03.997996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.477 [2024-07-23 08:46:03.998032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.477 [2024-07-23 08:46:03.998062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.477 [2024-07-23 08:46:03.998089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.477 [2024-07-23 08:46:03.998159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:32:52.044 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:32:52.304 true 00:32:52.304 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:52.304 08:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:32:53.245 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:32:53.245 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:32:53.245 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:32:53.245 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:53.245 08:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:32:53.815 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:32:53.815 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:32:53.815 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:32:54.075 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:54.075 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:32:54.334 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:32:54.334 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:32:54.334 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:54.334 08:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:32:54.902 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:32:54.902 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:32:54.902 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:32:55.472 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:55.472 08:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:32:56.041 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:32:56.041 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:32:56.041 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:32:56.301 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:32:56.301 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:32:56.560 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:32:56.560 08:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:32:56.560 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.FCrleno4db 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.hlimMSb2Ix 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.FCrleno4db 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hlimMSb2Ix 00:32:56.821 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:32:57.435 08:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:32:58.004 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.FCrleno4db 00:32:58.005 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FCrleno4db 00:32:58.005 08:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:58.945 [2024-07-23 08:46:11.122056] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.945 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:59.515 08:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:59.775 [2024-07-23 08:46:12.076891] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:59.775 [2024-07-23 08:46:12.077288] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.775 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:33:00.341 malloc0 00:33:00.341 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:00.601 08:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FCrleno4db 00:33:01.172 [2024-07-23 08:46:13.527880] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:01.172 08:46:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FCrleno4db 00:33:01.431 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.419 Initializing NVMe Controllers 00:33:11.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:11.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:11.419 Initialization complete. Launching workers. 00:33:11.419 ======================================================== 00:33:11.419 Latency(us) 00:33:11.420 Device Information : IOPS MiB/s Average min max 00:33:11.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4303.80 16.81 14879.06 2904.19 16363.70 00:33:11.420 ======================================================== 00:33:11.420 Total : 4303.80 16.81 14879.06 2904.19 16363.70 00:33:11.420 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FCrleno4db 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FCrleno4db' 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2403114 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2403114 /var/tmp/bdevperf.sock 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2403114 ']' 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:11.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.420 08:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:11.679 [2024-07-23 08:46:24.088627] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:33:11.680 [2024-07-23 08:46:24.088966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403114 ] 00:33:11.939 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.939 [2024-07-23 08:46:24.319204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.198 [2024-07-23 08:46:24.632799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:13.137 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.137 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:33:13.137 08:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FCrleno4db 00:33:13.705 [2024-07-23 08:46:26.151262] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:13.705 [2024-07-23 08:46:26.151536] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:33:13.964 TLSTESTn1 00:33:13.964 08:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:33:14.224 Running I/O for 10 seconds... 
00:33:24.219 00:33:24.219 Latency(us) 00:33:24.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.219 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:24.219 Verification LBA range: start 0x0 length 0x2000 00:33:24.219 TLSTESTn1 : 10.04 1941.66 7.58 0.00 0.00 65770.65 10971.21 53205.52 00:33:24.219 =================================================================================================================== 00:33:24.219 Total : 1941.66 7.58 0.00 0.00 65770.65 10971.21 53205.52 00:33:24.219 0 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2403114 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2403114 ']' 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2403114 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2403114 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2403114' 00:33:24.219 killing process with pid 2403114 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2403114 00:33:24.219 Received shutdown signal, test time was about 10.000000 seconds 00:33:24.219 00:33:24.219 Latency(us) 00:33:24.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.219 =================================================================================================================== 00:33:24.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.219 [2024-07-23 08:46:36.653413] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:33:24.219 08:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2403114 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hlimMSb2Ix 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hlimMSb2Ix 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hlimMSb2Ix 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hlimMSb2Ix' 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2404563 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2404563 /var/tmp/bdevperf.sock 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2404563 ']' 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:25.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:25.615 08:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:25.875 [2024-07-23 08:46:38.196305] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
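The two files created earlier hold keys in the NVMe TLS PSK interchange form: /tmp/tmp.FCrleno4db is the key registered for host1 via nvmf_subsystem_add_host, while /tmp/tmp.hlimMSb2Ix holds a second key the target knows nothing about. Judging only from the values printed in the trace, format_interchange_psk appears to base64-encode the configured key followed by a 4-byte CRC-32 and wrap it as NVMeTLSkey-1:<digest>:...:, along the lines of the sketch below; the exact construction is an assumption and should be checked against spdk/test/nvmf/common.sh rather than taken from here.

    # Assumed reconstruction of the first interchange key seen in the trace;
    # the "01" digest field is copied verbatim from the logged key.
    key="00112233445566778899aabbccddeeff"
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:01:%s:" % base64.b64encode(k+crc).decode())' "$key"

The bdevperf instance starting here is the first negative case (target/tls.sh@146): it tries to attach to cnode1 as host1 but presents the unregistered key /tmp/tmp.hlimMSb2Ix, so the connection is expected to fail and the NOT wrapper counts that failure as a pass.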
00:33:25.875 [2024-07-23 08:46:38.196647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404563 ] 00:33:25.875 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.135 [2024-07-23 08:46:38.452289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.394 [2024-07-23 08:46:38.770773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.961 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:26.961 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:33:26.961 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hlimMSb2Ix 00:33:27.530 [2024-07-23 08:46:39.749771] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:27.530 [2024-07-23 08:46:39.750027] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:33:27.530 [2024-07-23 08:46:39.765975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:27.530 [2024-07-23 08:46:39.767005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:33:27.530 [2024-07-23 08:46:39.767962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:33:27.530 [2024-07-23 08:46:39.768965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.530 [2024-07-23 08:46:39.769013] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:33:27.530 [2024-07-23 08:46:39.769061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:27.530 request: 00:33:27.530 { 00:33:27.530 "name": "TLSTEST", 00:33:27.530 "trtype": "tcp", 00:33:27.530 "traddr": "10.0.0.2", 00:33:27.530 "adrfam": "ipv4", 00:33:27.530 "trsvcid": "4420", 00:33:27.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:27.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:27.530 "prchk_reftag": false, 00:33:27.530 "prchk_guard": false, 00:33:27.530 "hdgst": false, 00:33:27.530 "ddgst": false, 00:33:27.530 "psk": "/tmp/tmp.hlimMSb2Ix", 00:33:27.530 "method": "bdev_nvme_attach_controller", 00:33:27.530 "req_id": 1 00:33:27.530 } 00:33:27.530 Got JSON-RPC error response 00:33:27.530 response: 00:33:27.530 { 00:33:27.530 "code": -5, 00:33:27.530 "message": "Input/output error" 00:33:27.530 } 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2404563 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2404563 ']' 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2404563 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2404563 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2404563' 00:33:27.530 killing process with pid 2404563 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2404563 00:33:27.530 Received shutdown signal, test time was about 10.000000 seconds 00:33:27.530 00:33:27.530 Latency(us) 00:33:27.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.530 =================================================================================================================== 00:33:27.530 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:27.530 [2024-07-23 08:46:39.840783] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:33:27.530 08:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2404563 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FCrleno4db 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FCrleno4db 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FCrleno4db 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FCrleno4db' 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2404963 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:33:28.911 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:28.912 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2404963 /var/tmp/bdevperf.sock 00:33:28.912 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2404963 ']' 00:33:28.912 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:28.912 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:28.912 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:28.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:28.912 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:28.912 08:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:28.912 [2024-07-23 08:46:41.274898] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
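The remaining negative cases vary the identity instead of the key: this bdevperf run (target/tls.sh@149) connects to cnode1 with the registered key file but as host2, for which no PSK was ever configured, and the case after it (tls.sh@152) targets cnode2, a subsystem that was never created. As the errors below show, the target looks the key up by the TLS PSK identity string "NVMe0R01 <hostnqn> <subnqn>", finds nothing, and the attach fails with the same Input/output error. Condensed to a single hypothetical command (the test actually drives it through run_bdevperf and a separate bdevperf process, as the trace shows):

    # Hypothetical condensed form; a non-zero exit is the expected (passing) outcome.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NOT $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FCrleno4db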
00:33:28.912 [2024-07-23 08:46:41.275085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404963 ] 00:33:28.912 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.171 [2024-07-23 08:46:41.453211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.431 [2024-07-23 08:46:41.766601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.000 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:30.000 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:33:30.000 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FCrleno4db 00:33:30.260 [2024-07-23 08:46:42.558228] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:30.260 [2024-07-23 08:46:42.558472] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:33:30.260 [2024-07-23 08:46:42.573010] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:33:30.260 [2024-07-23 08:46:42.573070] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:33:30.260 [2024-07-23 08:46:42.573153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:30.260 [2024-07-23 08:46:42.573955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:33:30.260 [2024-07-23 08:46:42.574919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:33:30.260 [2024-07-23 08:46:42.575906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.260 [2024-07-23 08:46:42.575962] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:33:30.260 [2024-07-23 08:46:42.575996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:30.260 request: 00:33:30.260 { 00:33:30.260 "name": "TLSTEST", 00:33:30.260 "trtype": "tcp", 00:33:30.260 "traddr": "10.0.0.2", 00:33:30.260 "adrfam": "ipv4", 00:33:30.260 "trsvcid": "4420", 00:33:30.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:30.260 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:30.260 "prchk_reftag": false, 00:33:30.260 "prchk_guard": false, 00:33:30.260 "hdgst": false, 00:33:30.260 "ddgst": false, 00:33:30.260 "psk": "/tmp/tmp.FCrleno4db", 00:33:30.260 "method": "bdev_nvme_attach_controller", 00:33:30.260 "req_id": 1 00:33:30.260 } 00:33:30.260 Got JSON-RPC error response 00:33:30.260 response: 00:33:30.260 { 00:33:30.260 "code": -5, 00:33:30.260 "message": "Input/output error" 00:33:30.260 } 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2404963 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2404963 ']' 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2404963 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2404963 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2404963' 00:33:30.260 killing process with pid 2404963 00:33:30.260 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2404963 00:33:30.260 Received shutdown signal, test time was about 10.000000 seconds 00:33:30.260 00:33:30.260 Latency(us) 00:33:30.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.261 =================================================================================================================== 00:33:30.261 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:30.261 [2024-07-23 08:46:42.650525] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:33:30.261 08:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2404963 00:33:31.643 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:33:31.643 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:33:31.643 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.643 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.643 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.643 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FCrleno4db 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FCrleno4db 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FCrleno4db 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FCrleno4db' 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2405238 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2405238 /var/tmp/bdevperf.sock 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2405238 ']' 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:31.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:31.644 08:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:31.644 [2024-07-23 08:46:44.155875] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:31.644 [2024-07-23 08:46:44.156200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405238 ] 00:33:31.904 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.904 [2024-07-23 08:46:44.415032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.472 [2024-07-23 08:46:44.729424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:32.731 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:32.731 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:33:32.731 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FCrleno4db 00:33:33.302 [2024-07-23 08:46:45.788813] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:33.302 [2024-07-23 08:46:45.789075] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:33:33.302 [2024-07-23 08:46:45.802076] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:33:33.302 [2024-07-23 08:46:45.802134] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:33:33.302 [2024-07-23 08:46:45.802214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:33.302 [2024-07-23 08:46:45.802924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (107): Transport endpoint is not connected 00:33:33.302 [2024-07-23 08:46:45.803887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:33:33.302 [2024-07-23 08:46:45.804873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:33:33.302 [2024-07-23 08:46:45.804922] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:33:33.302 [2024-07-23 08:46:45.804957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:33:33.302 request: 00:33:33.302 { 00:33:33.302 "name": "TLSTEST", 00:33:33.302 "trtype": "tcp", 00:33:33.302 "traddr": "10.0.0.2", 00:33:33.302 "adrfam": "ipv4", 00:33:33.302 "trsvcid": "4420", 00:33:33.302 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:33.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:33.302 "prchk_reftag": false, 00:33:33.302 "prchk_guard": false, 00:33:33.302 "hdgst": false, 00:33:33.302 "ddgst": false, 00:33:33.302 "psk": "/tmp/tmp.FCrleno4db", 00:33:33.302 "method": "bdev_nvme_attach_controller", 00:33:33.302 "req_id": 1 00:33:33.302 } 00:33:33.302 Got JSON-RPC error response 00:33:33.302 response: 00:33:33.302 { 00:33:33.302 "code": -5, 00:33:33.302 "message": "Input/output error" 00:33:33.302 } 00:33:33.561 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2405238 00:33:33.561 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2405238 ']' 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2405238 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2405238 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2405238' 00:33:33.562 killing process with pid 2405238 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2405238 00:33:33.562 Received shutdown signal, test time was about 10.000000 seconds 00:33:33.562 00:33:33.562 Latency(us) 00:33:33.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.562 =================================================================================================================== 00:33:33.562 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:33.562 [2024-07-23 08:46:45.876387] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:33:33.562 08:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2405238 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2405590 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2405590 /var/tmp/bdevperf.sock 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2405590 ']' 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:34.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:34.943 08:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:34.944 [2024-07-23 08:46:47.358966] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:34.944 [2024-07-23 08:46:47.359284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405590 ] 00:33:35.203 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.203 [2024-07-23 08:46:47.620444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.463 [2024-07-23 08:46:47.934197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.404 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:36.404 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:33:36.404 08:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:33:36.974 [2024-07-23 08:46:49.230999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:36.974 [2024-07-23 08:46:49.232937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:33:36.974 [2024-07-23 08:46:49.233917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:36.974 [2024-07-23 08:46:49.233975] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:33:36.974 [2024-07-23 08:46:49.234007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
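This second negative case attaches to the same listener (presumably set up as TLS-only earlier in the script, outside this excerpt) with no --psk argument at all, so the connection is dropped before the controller can initialize ("Transport endpoint is not connected" above) and the JSON-RPC response that follows again reports -5. The call under test, copied from the trace (rpc.py path shortened):

    # no PSK supplied at all: the secured listener tears the connection down
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1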
00:33:36.974 request: 00:33:36.974 { 00:33:36.974 "name": "TLSTEST", 00:33:36.974 "trtype": "tcp", 00:33:36.974 "traddr": "10.0.0.2", 00:33:36.974 "adrfam": "ipv4", 00:33:36.974 "trsvcid": "4420", 00:33:36.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.974 "prchk_reftag": false, 00:33:36.974 "prchk_guard": false, 00:33:36.974 "hdgst": false, 00:33:36.974 "ddgst": false, 00:33:36.974 "method": "bdev_nvme_attach_controller", 00:33:36.974 "req_id": 1 00:33:36.974 } 00:33:36.974 Got JSON-RPC error response 00:33:36.974 response: 00:33:36.974 { 00:33:36.974 "code": -5, 00:33:36.974 "message": "Input/output error" 00:33:36.974 } 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2405590 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2405590 ']' 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2405590 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2405590 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2405590' 00:33:36.974 killing process with pid 2405590 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2405590 00:33:36.974 Received shutdown signal, test time was about 10.000000 seconds 00:33:36.974 00:33:36.974 Latency(us) 00:33:36.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.974 =================================================================================================================== 00:33:36.974 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:36.974 08:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2405590 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2400821 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2400821 ']' 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2400821 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2400821 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2400821' 00:33:38.353 killing process with pid 2400821 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2400821 00:33:38.353 [2024-07-23 08:46:50.701171] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:38.353 08:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2400821 00:33:40.274 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:33:40.274 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:33:40.274 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:33:40.274 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:40.274 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:33:40.274 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:33:40.274 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.x8ms3XD6vM 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.x8ms3XD6vM 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2406187 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2406187 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2406187 ']' 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.275 08:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:40.275 08:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:40.535 [2024-07-23 08:46:52.845393] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:33:40.535 [2024-07-23 08:46:52.845714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.535 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.794 [2024-07-23 08:46:53.122658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.053 [2024-07-23 08:46:53.438753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.053 [2024-07-23 08:46:53.438837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.053 [2024-07-23 08:46:53.438873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.053 [2024-07-23 08:46:53.438904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.053 [2024-07-23 08:46:53.438932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
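A few entries back, the script derived a long-format interchange PSK (NVMeTLSkey-1:02:<base64>:) from the raw hex key 00112233445566778899aabbccddeeff0011223344556677 with digest selector 2, wrote it to a temp file and restricted its mode. A minimal sketch of that key-file step, with the key value taken verbatim from the trace (in this run mktemp returned /tmp/tmp.x8ms3XD6vM):

    # long-format TLS PSK written without a trailing newline, mode 0600;
    # the 0666 experiments later in the trace show SPDK rejects looser modes
    KEY='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    KEY_PATH=$(mktemp)
    echo -n "$KEY" > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"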
00:33:41.053 [2024-07-23 08:46:53.439000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.992 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:41.992 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:33:41.992 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:41.992 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:41.992 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:41.993 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.993 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.x8ms3XD6vM 00:33:41.993 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x8ms3XD6vM 00:33:41.993 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:33:42.561 [2024-07-23 08:46:54.916744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.561 08:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:33:43.130 08:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:33:43.699 [2024-07-23 08:46:56.124330] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:43.699 [2024-07-23 08:46:56.124736] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.699 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:33:44.636 malloc0 00:33:44.636 08:46:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:45.201 08:46:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM 00:33:45.767 [2024-07-23 08:46:58.048978] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x8ms3XD6vM 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x8ms3XD6vM' 00:33:45.767 08:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2406833 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2406833 /var/tmp/bdevperf.sock 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2406833 ']' 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:45.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.767 08:46:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:33:45.767 [2024-07-23 08:46:58.164478] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:33:45.767 [2024-07-23 08:46:58.164651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406833 ] 00:33:45.767 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.025 [2024-07-23 08:46:58.325394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.283 [2024-07-23 08:46:58.638926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.216 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.216 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:33:47.216 08:46:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM 00:33:47.781 [2024-07-23 08:47:00.162947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:47.781 [2024-07-23 08:47:00.163220] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:33:47.781 TLSTESTn1 00:33:47.781 08:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:33:48.039 Running I/O for 10 seconds... 
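Once TLSTESTn1 is attached, the 10-second verify workload above is kicked off through the bdevperf RPC helper, and its per-job results follow. The helper invocation, copied from the trace (path shortened; reading -t 20 as the helper's own wait timeout is my interpretation):

    # drive the already-configured bdevperf instance over its RPC socket
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests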
00:34:00.240 00:34:00.240 Latency(us) 00:34:00.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.240 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:00.240 Verification LBA range: start 0x0 length 0x2000 00:34:00.240 TLSTESTn1 : 10.05 1959.06 7.65 0.00 0.00 65147.35 11505.21 56312.41 00:34:00.240 =================================================================================================================== 00:34:00.240 Total : 1959.06 7.65 0.00 0.00 65147.35 11505.21 56312.41 00:34:00.240 0 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2406833 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2406833 ']' 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2406833 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2406833 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2406833' 00:34:00.240 killing process with pid 2406833 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2406833 00:34:00.240 Received shutdown signal, test time was about 10.000000 seconds 00:34:00.240 00:34:00.240 Latency(us) 00:34:00.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.240 =================================================================================================================== 00:34:00.240 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.240 [2024-07-23 08:47:10.646540] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:00.240 08:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2406833 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.x8ms3XD6vM 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x8ms3XD6vM 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x8ms3XD6vM 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:34:00.240 
08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x8ms3XD6vM 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:00.240 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x8ms3XD6vM' 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2408319 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2408319 /var/tmp/bdevperf.sock 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2408319 ']' 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:00.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:00.241 08:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:00.241 [2024-07-23 08:47:12.169144] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
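Just above, the key file was loosened to mode 0666 and a fresh bdevperf instance is starting up; the attach that follows is expected to fail SPDK's permission check on the PSK file rather than the TLS handshake itself. The pair of commands being exercised, copied from the trace (rpc.py path shortened):

    chmod 0666 /tmp/tmp.x8ms3XD6vM
    # initiator-side check: a world-readable key file is refused outright
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.x8ms3XD6vM
    # -> "Incorrect permissions for PSK file" / JSON-RPC -1 "Operation not permitted"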
00:34:00.241 [2024-07-23 08:47:12.169506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408319 ] 00:34:00.241 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.241 [2024-07-23 08:47:12.419264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.241 [2024-07-23 08:47:12.731808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.180 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:01.180 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:01.180 08:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM 00:34:01.745 [2024-07-23 08:47:14.024980] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:01.745 [2024-07-23 08:47:14.025089] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:34:01.745 [2024-07-23 08:47:14.025121] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.x8ms3XD6vM 00:34:01.745 request: 00:34:01.745 { 00:34:01.745 "name": "TLSTEST", 00:34:01.745 "trtype": "tcp", 00:34:01.745 "traddr": "10.0.0.2", 00:34:01.745 "adrfam": "ipv4", 00:34:01.745 "trsvcid": "4420", 00:34:01.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:01.745 "prchk_reftag": false, 00:34:01.745 "prchk_guard": false, 00:34:01.745 "hdgst": false, 00:34:01.745 "ddgst": false, 00:34:01.745 "psk": "/tmp/tmp.x8ms3XD6vM", 00:34:01.745 "method": "bdev_nvme_attach_controller", 00:34:01.745 "req_id": 1 00:34:01.745 } 00:34:01.745 Got JSON-RPC error response 00:34:01.745 response: 00:34:01.745 { 00:34:01.745 "code": -1, 00:34:01.745 "message": "Operation not permitted" 00:34:01.745 } 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2408319 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2408319 ']' 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2408319 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2408319 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2408319' 00:34:01.745 killing process with pid 2408319 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2408319 00:34:01.745 Received shutdown signal, test time was about 10.000000 seconds 00:34:01.745 
00:34:01.745 Latency(us) 00:34:01.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.745 =================================================================================================================== 00:34:01.745 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:01.745 08:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2408319 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2406187 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2406187 ']' 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2406187 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2406187 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2406187' 00:34:03.163 killing process with pid 2406187 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2406187 00:34:03.163 [2024-07-23 08:47:15.441621] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:03.163 08:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2406187 00:34:05.070 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2408982 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2408982 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2408982 ']' 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.071 08:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:05.071 08:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:05.071 [2024-07-23 08:47:17.441914] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:05.071 [2024-07-23 08:47:17.442256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.330 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.330 [2024-07-23 08:47:17.732033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.589 [2024-07-23 08:47:18.047612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.589 [2024-07-23 08:47:18.047704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:05.589 [2024-07-23 08:47:18.047740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.589 [2024-07-23 08:47:18.047781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.589 [2024-07-23 08:47:18.047810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
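A new nvmf_tgt (pid 2408982) has been brought up while the key file is still mode 0666, and the setup_nvmf_tgt sequence is replayed below under NOT: this time it is the target-side nvmf_subsystem_add_host call that is expected to trip the same permission check. The call under test, copied from the trace that follows (rpc.py path shortened):

    # target-side check: transport/subsystem/listener/namespace setup succeeds,
    # but registering the host with a world-readable PSK file is rejected
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM
    # -> "Could not retrieve PSK from file" / JSON-RPC -32603 "Internal error"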
00:34:05.589 [2024-07-23 08:47:18.047879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.x8ms3XD6vM 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.x8ms3XD6vM 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.x8ms3XD6vM 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x8ms3XD6vM 00:34:06.155 08:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:06.720 [2024-07-23 08:47:18.991544] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.720 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:06.979 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:34:07.238 [2024-07-23 08:47:19.705657] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:07.238 [2024-07-23 08:47:19.706057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:07.238 08:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:34:08.176 malloc0 00:34:08.176 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:08.435 08:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM 00:34:09.002 [2024-07-23 08:47:21.229933] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:34:09.002 [2024-07-23 08:47:21.230017] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:34:09.002 [2024-07-23 08:47:21.230080] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:34:09.002 request: 00:34:09.002 { 00:34:09.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:09.002 "host": "nqn.2016-06.io.spdk:host1", 00:34:09.002 "psk": "/tmp/tmp.x8ms3XD6vM", 00:34:09.002 "method": "nvmf_subsystem_add_host", 00:34:09.002 "req_id": 1 00:34:09.002 } 00:34:09.002 Got JSON-RPC error response 00:34:09.002 response: 00:34:09.002 { 00:34:09.002 "code": -32603, 00:34:09.002 "message": "Internal error" 00:34:09.002 } 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2408982 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2408982 ']' 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2408982 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2408982 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2408982' 00:34:09.002 killing process with pid 2408982 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2408982 00:34:09.002 08:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2408982 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.x8ms3XD6vM 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2409610 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 2409610 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2409610 ']' 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:10.907 08:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:10.907 [2024-07-23 08:47:23.288003] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:10.907 [2024-07-23 08:47:23.288362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.166 EAL: No free 2048 kB hugepages reported on node 1 00:34:11.166 [2024-07-23 08:47:23.571224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.425 [2024-07-23 08:47:23.895165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.425 [2024-07-23 08:47:23.895251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.425 [2024-07-23 08:47:23.895285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.425 [2024-07-23 08:47:23.895336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.425 [2024-07-23 08:47:23.895366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
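With the key file back at mode 0600, yet another nvmf_tgt (pid 2409610) is started and the full target-side TLS setup is replayed below, this time successfully, before a final bdevperf run and the save_config dumps. For reference, the sequence of rpc.py calls as they appear in the trace (paths shortened; per the config dump further down, -k on the listener corresponds to "secure_channel": true):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM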
00:34:11.425 [2024-07-23 08:47:23.895435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.x8ms3XD6vM 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x8ms3XD6vM 00:34:12.362 08:47:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:12.928 [2024-07-23 08:47:25.148970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.928 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:13.497 08:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:34:14.067 [2024-07-23 08:47:26.428711] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:14.067 [2024-07-23 08:47:26.429092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.067 08:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:34:14.636 malloc0 00:34:14.636 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:15.200 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM 00:34:15.458 [2024-07-23 08:47:27.724589] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2410117 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2410117 /var/tmp/bdevperf.sock 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 2410117 ']' 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:15.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.458 08:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:15.458 [2024-07-23 08:47:27.834536] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:15.458 [2024-07-23 08:47:27.834709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410117 ] 00:34:15.458 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.717 [2024-07-23 08:47:27.993485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.977 [2024-07-23 08:47:28.307297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.919 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:16.919 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:16.919 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM 00:34:17.193 [2024-07-23 08:47:29.592026] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:17.193 [2024-07-23 08:47:29.592296] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:17.193 TLSTESTn1 00:34:17.193 08:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:34:17.771 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:34:17.771 "subsystems": [ 00:34:17.771 { 00:34:17.771 "subsystem": "keyring", 00:34:17.771 "config": [] 00:34:17.771 }, 00:34:17.771 { 00:34:17.771 "subsystem": "iobuf", 00:34:17.771 "config": [ 00:34:17.771 { 00:34:17.771 "method": "iobuf_set_options", 00:34:17.771 "params": { 00:34:17.771 "small_pool_count": 8192, 00:34:17.771 "large_pool_count": 1024, 00:34:17.771 "small_bufsize": 8192, 00:34:17.771 "large_bufsize": 135168 00:34:17.771 } 00:34:17.771 } 00:34:17.771 ] 00:34:17.771 }, 00:34:17.771 { 00:34:17.771 "subsystem": "sock", 00:34:17.771 "config": [ 00:34:17.771 { 00:34:17.771 "method": "sock_set_default_impl", 00:34:17.771 "params": { 00:34:17.771 "impl_name": "posix" 00:34:17.771 } 00:34:17.771 }, 00:34:17.771 { 00:34:17.771 "method": "sock_impl_set_options", 00:34:17.771 "params": { 00:34:17.771 "impl_name": "ssl", 00:34:17.771 "recv_buf_size": 4096, 00:34:17.771 "send_buf_size": 4096, 
00:34:17.771 "enable_recv_pipe": true, 00:34:17.771 "enable_quickack": false, 00:34:17.771 "enable_placement_id": 0, 00:34:17.771 "enable_zerocopy_send_server": true, 00:34:17.771 "enable_zerocopy_send_client": false, 00:34:17.771 "zerocopy_threshold": 0, 00:34:17.771 "tls_version": 0, 00:34:17.771 "enable_ktls": false 00:34:17.771 } 00:34:17.771 }, 00:34:17.771 { 00:34:17.771 "method": "sock_impl_set_options", 00:34:17.771 "params": { 00:34:17.771 "impl_name": "posix", 00:34:17.771 "recv_buf_size": 2097152, 00:34:17.771 "send_buf_size": 2097152, 00:34:17.771 "enable_recv_pipe": true, 00:34:17.771 "enable_quickack": false, 00:34:17.771 "enable_placement_id": 0, 00:34:17.771 "enable_zerocopy_send_server": true, 00:34:17.771 "enable_zerocopy_send_client": false, 00:34:17.771 "zerocopy_threshold": 0, 00:34:17.772 "tls_version": 0, 00:34:17.772 "enable_ktls": false 00:34:17.772 } 00:34:17.772 } 00:34:17.772 ] 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "subsystem": "vmd", 00:34:17.772 "config": [] 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "subsystem": "accel", 00:34:17.772 "config": [ 00:34:17.772 { 00:34:17.772 "method": "accel_set_options", 00:34:17.772 "params": { 00:34:17.772 "small_cache_size": 128, 00:34:17.772 "large_cache_size": 16, 00:34:17.772 "task_count": 2048, 00:34:17.772 "sequence_count": 2048, 00:34:17.772 "buf_count": 2048 00:34:17.772 } 00:34:17.772 } 00:34:17.772 ] 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "subsystem": "bdev", 00:34:17.772 "config": [ 00:34:17.772 { 00:34:17.772 "method": "bdev_set_options", 00:34:17.772 "params": { 00:34:17.772 "bdev_io_pool_size": 65535, 00:34:17.772 "bdev_io_cache_size": 256, 00:34:17.772 "bdev_auto_examine": true, 00:34:17.772 "iobuf_small_cache_size": 128, 00:34:17.772 "iobuf_large_cache_size": 16 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "bdev_raid_set_options", 00:34:17.772 "params": { 00:34:17.772 "process_window_size_kb": 1024, 00:34:17.772 "process_max_bandwidth_mb_sec": 0 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "bdev_iscsi_set_options", 00:34:17.772 "params": { 00:34:17.772 "timeout_sec": 30 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "bdev_nvme_set_options", 00:34:17.772 "params": { 00:34:17.772 "action_on_timeout": "none", 00:34:17.772 "timeout_us": 0, 00:34:17.772 "timeout_admin_us": 0, 00:34:17.772 "keep_alive_timeout_ms": 10000, 00:34:17.772 "arbitration_burst": 0, 00:34:17.772 "low_priority_weight": 0, 00:34:17.772 "medium_priority_weight": 0, 00:34:17.772 "high_priority_weight": 0, 00:34:17.772 "nvme_adminq_poll_period_us": 10000, 00:34:17.772 "nvme_ioq_poll_period_us": 0, 00:34:17.772 "io_queue_requests": 0, 00:34:17.772 "delay_cmd_submit": true, 00:34:17.772 "transport_retry_count": 4, 00:34:17.772 "bdev_retry_count": 3, 00:34:17.772 "transport_ack_timeout": 0, 00:34:17.772 "ctrlr_loss_timeout_sec": 0, 00:34:17.772 "reconnect_delay_sec": 0, 00:34:17.772 "fast_io_fail_timeout_sec": 0, 00:34:17.772 "disable_auto_failback": false, 00:34:17.772 "generate_uuids": false, 00:34:17.772 "transport_tos": 0, 00:34:17.772 "nvme_error_stat": false, 00:34:17.772 "rdma_srq_size": 0, 00:34:17.772 "io_path_stat": false, 00:34:17.772 "allow_accel_sequence": false, 00:34:17.772 "rdma_max_cq_size": 0, 00:34:17.772 "rdma_cm_event_timeout_ms": 0, 00:34:17.772 "dhchap_digests": [ 00:34:17.772 "sha256", 00:34:17.772 "sha384", 00:34:17.772 "sha512" 00:34:17.772 ], 00:34:17.772 "dhchap_dhgroups": [ 00:34:17.772 "null", 00:34:17.772 "ffdhe2048", 00:34:17.772 
"ffdhe3072", 00:34:17.772 "ffdhe4096", 00:34:17.772 "ffdhe6144", 00:34:17.772 "ffdhe8192" 00:34:17.772 ] 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "bdev_nvme_set_hotplug", 00:34:17.772 "params": { 00:34:17.772 "period_us": 100000, 00:34:17.772 "enable": false 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "bdev_malloc_create", 00:34:17.772 "params": { 00:34:17.772 "name": "malloc0", 00:34:17.772 "num_blocks": 8192, 00:34:17.772 "block_size": 4096, 00:34:17.772 "physical_block_size": 4096, 00:34:17.772 "uuid": "f70ff43d-d76d-4bc7-ad5e-8fe79b9c1261", 00:34:17.772 "optimal_io_boundary": 0, 00:34:17.772 "md_size": 0, 00:34:17.772 "dif_type": 0, 00:34:17.772 "dif_is_head_of_md": false, 00:34:17.772 "dif_pi_format": 0 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "bdev_wait_for_examine" 00:34:17.772 } 00:34:17.772 ] 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "subsystem": "nbd", 00:34:17.772 "config": [] 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "subsystem": "scheduler", 00:34:17.772 "config": [ 00:34:17.772 { 00:34:17.772 "method": "framework_set_scheduler", 00:34:17.772 "params": { 00:34:17.772 "name": "static" 00:34:17.772 } 00:34:17.772 } 00:34:17.772 ] 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "subsystem": "nvmf", 00:34:17.772 "config": [ 00:34:17.772 { 00:34:17.772 "method": "nvmf_set_config", 00:34:17.772 "params": { 00:34:17.772 "discovery_filter": "match_any", 00:34:17.772 "admin_cmd_passthru": { 00:34:17.772 "identify_ctrlr": false 00:34:17.772 } 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "nvmf_set_max_subsystems", 00:34:17.772 "params": { 00:34:17.772 "max_subsystems": 1024 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "nvmf_set_crdt", 00:34:17.772 "params": { 00:34:17.772 "crdt1": 0, 00:34:17.772 "crdt2": 0, 00:34:17.772 "crdt3": 0 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "nvmf_create_transport", 00:34:17.772 "params": { 00:34:17.772 "trtype": "TCP", 00:34:17.772 "max_queue_depth": 128, 00:34:17.772 "max_io_qpairs_per_ctrlr": 127, 00:34:17.772 "in_capsule_data_size": 4096, 00:34:17.772 "max_io_size": 131072, 00:34:17.772 "io_unit_size": 131072, 00:34:17.772 "max_aq_depth": 128, 00:34:17.772 "num_shared_buffers": 511, 00:34:17.772 "buf_cache_size": 4294967295, 00:34:17.772 "dif_insert_or_strip": false, 00:34:17.772 "zcopy": false, 00:34:17.772 "c2h_success": false, 00:34:17.772 "sock_priority": 0, 00:34:17.772 "abort_timeout_sec": 1, 00:34:17.772 "ack_timeout": 0, 00:34:17.772 "data_wr_pool_size": 0 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "nvmf_create_subsystem", 00:34:17.772 "params": { 00:34:17.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.772 "allow_any_host": false, 00:34:17.772 "serial_number": "SPDK00000000000001", 00:34:17.772 "model_number": "SPDK bdev Controller", 00:34:17.772 "max_namespaces": 10, 00:34:17.772 "min_cntlid": 1, 00:34:17.772 "max_cntlid": 65519, 00:34:17.772 "ana_reporting": false 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "nvmf_subsystem_add_host", 00:34:17.772 "params": { 00:34:17.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.772 "host": "nqn.2016-06.io.spdk:host1", 00:34:17.772 "psk": "/tmp/tmp.x8ms3XD6vM" 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "nvmf_subsystem_add_ns", 00:34:17.772 "params": { 00:34:17.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.772 "namespace": { 00:34:17.772 "nsid": 1, 00:34:17.772 
"bdev_name": "malloc0", 00:34:17.772 "nguid": "F70FF43DD76D4BC7AD5E8FE79B9C1261", 00:34:17.772 "uuid": "f70ff43d-d76d-4bc7-ad5e-8fe79b9c1261", 00:34:17.772 "no_auto_visible": false 00:34:17.772 } 00:34:17.772 } 00:34:17.772 }, 00:34:17.772 { 00:34:17.772 "method": "nvmf_subsystem_add_listener", 00:34:17.772 "params": { 00:34:17.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.772 "listen_address": { 00:34:17.772 "trtype": "TCP", 00:34:17.772 "adrfam": "IPv4", 00:34:17.772 "traddr": "10.0.0.2", 00:34:17.772 "trsvcid": "4420" 00:34:17.772 }, 00:34:17.772 "secure_channel": true 00:34:17.772 } 00:34:17.772 } 00:34:17.772 ] 00:34:17.772 } 00:34:17.772 ] 00:34:17.772 }' 00:34:17.773 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:34:18.343 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:34:18.343 "subsystems": [ 00:34:18.343 { 00:34:18.343 "subsystem": "keyring", 00:34:18.343 "config": [] 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "subsystem": "iobuf", 00:34:18.343 "config": [ 00:34:18.343 { 00:34:18.343 "method": "iobuf_set_options", 00:34:18.343 "params": { 00:34:18.343 "small_pool_count": 8192, 00:34:18.343 "large_pool_count": 1024, 00:34:18.343 "small_bufsize": 8192, 00:34:18.343 "large_bufsize": 135168 00:34:18.343 } 00:34:18.343 } 00:34:18.343 ] 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "subsystem": "sock", 00:34:18.343 "config": [ 00:34:18.343 { 00:34:18.343 "method": "sock_set_default_impl", 00:34:18.343 "params": { 00:34:18.343 "impl_name": "posix" 00:34:18.343 } 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "method": "sock_impl_set_options", 00:34:18.343 "params": { 00:34:18.343 "impl_name": "ssl", 00:34:18.343 "recv_buf_size": 4096, 00:34:18.343 "send_buf_size": 4096, 00:34:18.343 "enable_recv_pipe": true, 00:34:18.343 "enable_quickack": false, 00:34:18.343 "enable_placement_id": 0, 00:34:18.343 "enable_zerocopy_send_server": true, 00:34:18.343 "enable_zerocopy_send_client": false, 00:34:18.343 "zerocopy_threshold": 0, 00:34:18.343 "tls_version": 0, 00:34:18.343 "enable_ktls": false 00:34:18.343 } 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "method": "sock_impl_set_options", 00:34:18.343 "params": { 00:34:18.343 "impl_name": "posix", 00:34:18.343 "recv_buf_size": 2097152, 00:34:18.343 "send_buf_size": 2097152, 00:34:18.343 "enable_recv_pipe": true, 00:34:18.343 "enable_quickack": false, 00:34:18.343 "enable_placement_id": 0, 00:34:18.343 "enable_zerocopy_send_server": true, 00:34:18.343 "enable_zerocopy_send_client": false, 00:34:18.343 "zerocopy_threshold": 0, 00:34:18.343 "tls_version": 0, 00:34:18.343 "enable_ktls": false 00:34:18.343 } 00:34:18.343 } 00:34:18.343 ] 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "subsystem": "vmd", 00:34:18.343 "config": [] 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "subsystem": "accel", 00:34:18.343 "config": [ 00:34:18.343 { 00:34:18.343 "method": "accel_set_options", 00:34:18.343 "params": { 00:34:18.343 "small_cache_size": 128, 00:34:18.343 "large_cache_size": 16, 00:34:18.343 "task_count": 2048, 00:34:18.343 "sequence_count": 2048, 00:34:18.343 "buf_count": 2048 00:34:18.343 } 00:34:18.343 } 00:34:18.343 ] 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "subsystem": "bdev", 00:34:18.343 "config": [ 00:34:18.343 { 00:34:18.343 "method": "bdev_set_options", 00:34:18.343 "params": { 00:34:18.343 "bdev_io_pool_size": 65535, 00:34:18.343 "bdev_io_cache_size": 256, 00:34:18.343 
"bdev_auto_examine": true, 00:34:18.343 "iobuf_small_cache_size": 128, 00:34:18.343 "iobuf_large_cache_size": 16 00:34:18.343 } 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "method": "bdev_raid_set_options", 00:34:18.343 "params": { 00:34:18.343 "process_window_size_kb": 1024, 00:34:18.343 "process_max_bandwidth_mb_sec": 0 00:34:18.343 } 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "method": "bdev_iscsi_set_options", 00:34:18.343 "params": { 00:34:18.343 "timeout_sec": 30 00:34:18.343 } 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "method": "bdev_nvme_set_options", 00:34:18.343 "params": { 00:34:18.343 "action_on_timeout": "none", 00:34:18.343 "timeout_us": 0, 00:34:18.343 "timeout_admin_us": 0, 00:34:18.343 "keep_alive_timeout_ms": 10000, 00:34:18.343 "arbitration_burst": 0, 00:34:18.343 "low_priority_weight": 0, 00:34:18.343 "medium_priority_weight": 0, 00:34:18.343 "high_priority_weight": 0, 00:34:18.343 "nvme_adminq_poll_period_us": 10000, 00:34:18.343 "nvme_ioq_poll_period_us": 0, 00:34:18.343 "io_queue_requests": 512, 00:34:18.343 "delay_cmd_submit": true, 00:34:18.343 "transport_retry_count": 4, 00:34:18.343 "bdev_retry_count": 3, 00:34:18.343 "transport_ack_timeout": 0, 00:34:18.343 "ctrlr_loss_timeout_sec": 0, 00:34:18.343 "reconnect_delay_sec": 0, 00:34:18.343 "fast_io_fail_timeout_sec": 0, 00:34:18.343 "disable_auto_failback": false, 00:34:18.343 "generate_uuids": false, 00:34:18.343 "transport_tos": 0, 00:34:18.343 "nvme_error_stat": false, 00:34:18.343 "rdma_srq_size": 0, 00:34:18.343 "io_path_stat": false, 00:34:18.343 "allow_accel_sequence": false, 00:34:18.343 "rdma_max_cq_size": 0, 00:34:18.343 "rdma_cm_event_timeout_ms": 0, 00:34:18.343 "dhchap_digests": [ 00:34:18.343 "sha256", 00:34:18.343 "sha384", 00:34:18.343 "sha512" 00:34:18.343 ], 00:34:18.343 "dhchap_dhgroups": [ 00:34:18.343 "null", 00:34:18.343 "ffdhe2048", 00:34:18.343 "ffdhe3072", 00:34:18.343 "ffdhe4096", 00:34:18.343 "ffdhe6144", 00:34:18.343 "ffdhe8192" 00:34:18.343 ] 00:34:18.343 } 00:34:18.343 }, 00:34:18.343 { 00:34:18.343 "method": "bdev_nvme_attach_controller", 00:34:18.343 "params": { 00:34:18.343 "name": "TLSTEST", 00:34:18.343 "trtype": "TCP", 00:34:18.343 "adrfam": "IPv4", 00:34:18.343 "traddr": "10.0.0.2", 00:34:18.343 "trsvcid": "4420", 00:34:18.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.344 "prchk_reftag": false, 00:34:18.344 "prchk_guard": false, 00:34:18.344 "ctrlr_loss_timeout_sec": 0, 00:34:18.344 "reconnect_delay_sec": 0, 00:34:18.344 "fast_io_fail_timeout_sec": 0, 00:34:18.344 "psk": "/tmp/tmp.x8ms3XD6vM", 00:34:18.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.344 "hdgst": false, 00:34:18.344 "ddgst": false 00:34:18.344 } 00:34:18.344 }, 00:34:18.344 { 00:34:18.344 "method": "bdev_nvme_set_hotplug", 00:34:18.344 "params": { 00:34:18.344 "period_us": 100000, 00:34:18.344 "enable": false 00:34:18.344 } 00:34:18.344 }, 00:34:18.344 { 00:34:18.344 "method": "bdev_wait_for_examine" 00:34:18.344 } 00:34:18.344 ] 00:34:18.344 }, 00:34:18.344 { 00:34:18.344 "subsystem": "nbd", 00:34:18.344 "config": [] 00:34:18.344 } 00:34:18.344 ] 00:34:18.344 }' 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2410117 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2410117 ']' 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2410117 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410117 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410117' 00:34:18.344 killing process with pid 2410117 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2410117 00:34:18.344 Received shutdown signal, test time was about 10.000000 seconds 00:34:18.344 00:34:18.344 Latency(us) 00:34:18.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.344 =================================================================================================================== 00:34:18.344 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:18.344 [2024-07-23 08:47:30.831417] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:18.344 08:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2410117 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2409610 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2409610 ']' 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2409610 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2409610 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2409610' 00:34:19.723 killing process with pid 2409610 00:34:19.723 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2409610 00:34:19.724 [2024-07-23 08:47:32.197122] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:19.724 08:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2409610 00:34:21.630 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:34:21.630 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:21.630 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:34:21.630 "subsystems": [ 00:34:21.630 { 00:34:21.630 "subsystem": "keyring", 00:34:21.630 "config": [] 00:34:21.630 }, 00:34:21.630 { 00:34:21.630 "subsystem": "iobuf", 00:34:21.630 "config": [ 00:34:21.630 { 00:34:21.630 "method": "iobuf_set_options", 
00:34:21.630 "params": { 00:34:21.630 "small_pool_count": 8192, 00:34:21.630 "large_pool_count": 1024, 00:34:21.630 "small_bufsize": 8192, 00:34:21.630 "large_bufsize": 135168 00:34:21.630 } 00:34:21.630 } 00:34:21.630 ] 00:34:21.630 }, 00:34:21.630 { 00:34:21.630 "subsystem": "sock", 00:34:21.630 "config": [ 00:34:21.630 { 00:34:21.630 "method": "sock_set_default_impl", 00:34:21.630 "params": { 00:34:21.630 "impl_name": "posix" 00:34:21.630 } 00:34:21.630 }, 00:34:21.630 { 00:34:21.630 "method": "sock_impl_set_options", 00:34:21.630 "params": { 00:34:21.630 "impl_name": "ssl", 00:34:21.630 "recv_buf_size": 4096, 00:34:21.630 "send_buf_size": 4096, 00:34:21.630 "enable_recv_pipe": true, 00:34:21.630 "enable_quickack": false, 00:34:21.630 "enable_placement_id": 0, 00:34:21.630 "enable_zerocopy_send_server": true, 00:34:21.630 "enable_zerocopy_send_client": false, 00:34:21.630 "zerocopy_threshold": 0, 00:34:21.631 "tls_version": 0, 00:34:21.631 "enable_ktls": false 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "sock_impl_set_options", 00:34:21.631 "params": { 00:34:21.631 "impl_name": "posix", 00:34:21.631 "recv_buf_size": 2097152, 00:34:21.631 "send_buf_size": 2097152, 00:34:21.631 "enable_recv_pipe": true, 00:34:21.631 "enable_quickack": false, 00:34:21.631 "enable_placement_id": 0, 00:34:21.631 "enable_zerocopy_send_server": true, 00:34:21.631 "enable_zerocopy_send_client": false, 00:34:21.631 "zerocopy_threshold": 0, 00:34:21.631 "tls_version": 0, 00:34:21.631 "enable_ktls": false 00:34:21.631 } 00:34:21.631 } 00:34:21.631 ] 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "subsystem": "vmd", 00:34:21.631 "config": [] 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "subsystem": "accel", 00:34:21.631 "config": [ 00:34:21.631 { 00:34:21.631 "method": "accel_set_options", 00:34:21.631 "params": { 00:34:21.631 "small_cache_size": 128, 00:34:21.631 "large_cache_size": 16, 00:34:21.631 "task_count": 2048, 00:34:21.631 "sequence_count": 2048, 00:34:21.631 "buf_count": 2048 00:34:21.631 } 00:34:21.631 } 00:34:21.631 ] 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "subsystem": "bdev", 00:34:21.631 "config": [ 00:34:21.631 { 00:34:21.631 "method": "bdev_set_options", 00:34:21.631 "params": { 00:34:21.631 "bdev_io_pool_size": 65535, 00:34:21.631 "bdev_io_cache_size": 256, 00:34:21.631 "bdev_auto_examine": true, 00:34:21.631 "iobuf_small_cache_size": 128, 00:34:21.631 "iobuf_large_cache_size": 16 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "bdev_raid_set_options", 00:34:21.631 "params": { 00:34:21.631 "process_window_size_kb": 1024, 00:34:21.631 "process_max_bandwidth_mb_sec": 0 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "bdev_iscsi_set_options", 00:34:21.631 "params": { 00:34:21.631 "timeout_sec": 30 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "bdev_nvme_set_options", 00:34:21.631 "params": { 00:34:21.631 "action_on_timeout": "none", 00:34:21.631 "timeout_us": 0, 00:34:21.631 "timeout_admin_us": 0, 00:34:21.631 "keep_alive_timeout_ms": 10000, 00:34:21.631 "arbitration_burst": 0, 00:34:21.631 "low_priority_weight": 0, 00:34:21.631 "medium_priority_weight": 0, 00:34:21.631 "high_priority_weight": 0, 00:34:21.631 "nvme_adminq_poll_period_us": 10000, 00:34:21.631 "nvme_ioq_poll_period_us": 0, 00:34:21.631 "io_queue_requests": 0, 00:34:21.631 "delay_cmd_submit": true, 00:34:21.631 "transport_retry_count": 4, 00:34:21.631 "bdev_retry_count": 3, 00:34:21.631 "transport_ack_timeout": 0, 00:34:21.631 
"ctrlr_loss_timeout_sec": 0, 00:34:21.631 "reconnect_delay_sec": 0, 00:34:21.631 "fast_io_fail_timeout_sec": 0, 00:34:21.631 "disable_auto_failback": false, 00:34:21.631 "generate_uuids": false, 00:34:21.631 "transport_tos": 0, 00:34:21.631 "nvme_error_stat": false, 00:34:21.631 "rdma_srq_size": 0, 00:34:21.631 "io_path_stat": false, 00:34:21.631 "allow_accel_sequence": false, 00:34:21.631 "rdma_max_cq_size": 0, 00:34:21.631 "rdma_cm_event_timeout_ms": 0, 00:34:21.631 "dhchap_digests": [ 00:34:21.631 "sha256", 00:34:21.631 "sha384", 00:34:21.631 "sha512" 00:34:21.631 ], 00:34:21.631 "dhchap_dhgroups": [ 00:34:21.631 "null", 00:34:21.631 "ffdhe2048", 00:34:21.631 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:21.631 "ffdhe3072", 00:34:21.631 "ffdhe4096", 00:34:21.631 "ffdhe6144", 00:34:21.631 "ffdhe8192" 00:34:21.631 ] 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "bdev_nvme_set_hotplug", 00:34:21.631 "params": { 00:34:21.631 "period_us": 100000, 00:34:21.631 "enable": false 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "bdev_malloc_create", 00:34:21.631 "params": { 00:34:21.631 "name": "malloc0", 00:34:21.631 "num_blocks": 8192, 00:34:21.631 "block_size": 4096, 00:34:21.631 "physical_block_size": 4096, 00:34:21.631 "uuid": "f70ff43d-d76d-4bc7-ad5e-8fe79b9c1261", 00:34:21.631 "optimal_io_boundary": 0, 00:34:21.631 "md_size": 0, 00:34:21.631 "dif_type": 0, 00:34:21.631 "dif_is_head_of_md": false, 00:34:21.631 "dif_pi_format": 0 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "bdev_wait_for_examine" 00:34:21.631 } 00:34:21.631 ] 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "subsystem": "nbd", 00:34:21.631 "config": [] 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "subsystem": "scheduler", 00:34:21.631 "config": [ 00:34:21.631 { 00:34:21.631 "method": "framework_set_scheduler", 00:34:21.631 "params": { 00:34:21.631 "name": "static" 00:34:21.631 } 00:34:21.631 } 00:34:21.631 ] 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "subsystem": "nvmf", 00:34:21.631 "config": [ 00:34:21.631 { 00:34:21.631 "method": "nvmf_set_config", 00:34:21.631 "params": { 00:34:21.631 "discovery_filter": "match_any", 00:34:21.631 "admin_cmd_passthru": { 00:34:21.631 "identify_ctrlr": false 00:34:21.631 } 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "nvmf_set_max_subsystems", 00:34:21.631 "params": { 00:34:21.631 "max_subsystems": 1024 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "nvmf_set_crdt", 00:34:21.631 "params": { 00:34:21.631 "crdt1": 0, 00:34:21.631 "crdt2": 0, 00:34:21.631 "crdt3": 0 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "nvmf_create_transport", 00:34:21.631 "params": { 00:34:21.631 "trtype": "TCP", 00:34:21.631 "max_queue_depth": 128, 00:34:21.631 "max_io_qpairs_per_ctrlr": 127, 00:34:21.631 "in_capsule_data_size": 4096, 00:34:21.631 "max_io_size": 131072, 00:34:21.631 "io_unit_size": 131072, 00:34:21.631 "max_aq_depth": 128, 00:34:21.631 "num_shared_buffers": 511, 00:34:21.631 "buf_cache_size": 4294967295, 00:34:21.631 "dif_insert_or_strip": false, 00:34:21.631 "zcopy": false, 00:34:21.631 "c2h_success": false, 00:34:21.631 "sock_priority": 0, 00:34:21.631 "abort_timeout_sec": 1, 00:34:21.631 "ack_timeout": 0, 00:34:21.631 "data_wr_pool_size": 0 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "nvmf_create_subsystem", 00:34:21.631 "params": { 00:34:21.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:34:21.631 "allow_any_host": false, 00:34:21.631 "serial_number": "SPDK00000000000001", 00:34:21.631 "model_number": "SPDK bdev Controller", 00:34:21.631 "max_namespaces": 10, 00:34:21.631 "min_cntlid": 1, 00:34:21.631 "max_cntlid": 65519, 00:34:21.631 "ana_reporting": false 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.631 "method": "nvmf_subsystem_add_host", 00:34:21.631 "params": { 00:34:21.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.631 "host": "nqn.2016-06.io.spdk:host1", 00:34:21.631 "psk": "/tmp/tmp.x8ms3XD6vM" 00:34:21.631 } 00:34:21.631 }, 00:34:21.631 { 00:34:21.632 "method": "nvmf_subsystem_add_ns", 00:34:21.632 "params": { 00:34:21.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.632 "namespace": { 00:34:21.632 "nsid": 1, 00:34:21.632 "bdev_name": "malloc0", 00:34:21.632 "nguid": "F70FF43DD76D4BC7AD5E8FE79B9C1261", 00:34:21.632 "uuid": "f70ff43d-d76d-4bc7-ad5e-8fe79b9c1261", 00:34:21.632 "no_auto_visible": false 00:34:21.632 } 00:34:21.632 } 00:34:21.632 }, 00:34:21.632 { 00:34:21.632 "method": "nvmf_subsystem_add_listener", 00:34:21.632 "params": { 00:34:21.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.632 "listen_address": { 00:34:21.632 "trtype": "TCP", 00:34:21.632 "adrfam": "IPv4", 00:34:21.632 "traddr": "10.0.0.2", 00:34:21.632 "trsvcid": "4420" 00:34:21.632 }, 00:34:21.632 "secure_channel": true 00:34:21.632 } 00:34:21.632 } 00:34:21.632 ] 00:34:21.632 } 00:34:21.632 ] 00:34:21.632 }' 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2410821 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2410821 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2410821 ']' 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:21.632 08:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:21.632 [2024-07-23 08:47:34.137665] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:21.632 [2024-07-23 08:47:34.137970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.891 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.891 [2024-07-23 08:47:34.407485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.459 [2024-07-23 08:47:34.729753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:22.459 [2024-07-23 08:47:34.729832] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.459 [2024-07-23 08:47:34.729866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.459 [2024-07-23 08:47:34.729897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.459 [2024-07-23 08:47:34.729931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.459 [2024-07-23 08:47:34.730127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.026 [2024-07-23 08:47:35.368691] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.027 [2024-07-23 08:47:35.384635] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:23.027 [2024-07-23 08:47:35.400708] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:23.027 [2024-07-23 08:47:35.401036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2411063 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2411063 /var/tmp/bdevperf.sock 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2411063 ']' 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
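Neither process in this phase is configured over live RPC: the target (pid 2410821, target/tls.sh@203 nvmfappstart -m 0x2 -c /dev/fd/62) was fed the tgtconf JSON echoed earlier, and bdevperf (pid 2411063, target/tls.sh@204, -c /dev/fd/63) is fed the bdevperfconf JSON echoed below. The /dev/fd/6x descriptors are consistent with bash process substitution; a minimal sketch of that launch pattern, under that assumption (the real run also wraps the target in ip netns exec cvl_0_0_ns_spdk):

# Hedged sketch of the pattern implied by '-c /dev/fd/62' plus the echoed JSON above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &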
00:34:23.287 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:34:23.287 "subsystems": [ 00:34:23.287 { 00:34:23.287 "subsystem": "keyring", 00:34:23.287 "config": [] 00:34:23.287 }, 00:34:23.287 { 00:34:23.287 "subsystem": "iobuf", 00:34:23.287 "config": [ 00:34:23.287 { 00:34:23.287 "method": "iobuf_set_options", 00:34:23.287 "params": { 00:34:23.287 "small_pool_count": 8192, 00:34:23.287 "large_pool_count": 1024, 00:34:23.287 "small_bufsize": 8192, 00:34:23.287 "large_bufsize": 135168 00:34:23.287 } 00:34:23.287 } 00:34:23.287 ] 00:34:23.287 }, 00:34:23.287 { 00:34:23.287 "subsystem": "sock", 00:34:23.287 "config": [ 00:34:23.287 { 00:34:23.287 "method": "sock_set_default_impl", 00:34:23.287 "params": { 00:34:23.287 "impl_name": "posix" 00:34:23.287 } 00:34:23.287 }, 00:34:23.287 { 00:34:23.287 "method": "sock_impl_set_options", 00:34:23.287 "params": { 00:34:23.287 "impl_name": "ssl", 00:34:23.287 "recv_buf_size": 4096, 00:34:23.287 "send_buf_size": 4096, 00:34:23.287 "enable_recv_pipe": true, 00:34:23.287 "enable_quickack": false, 00:34:23.287 "enable_placement_id": 0, 00:34:23.287 "enable_zerocopy_send_server": true, 00:34:23.287 "enable_zerocopy_send_client": false, 00:34:23.287 "zerocopy_threshold": 0, 00:34:23.287 "tls_version": 0, 00:34:23.287 "enable_ktls": false 00:34:23.287 } 00:34:23.287 }, 00:34:23.287 { 00:34:23.287 "method": "sock_impl_set_options", 00:34:23.287 "params": { 00:34:23.287 "impl_name": "posix", 00:34:23.287 "recv_buf_size": 2097152, 00:34:23.287 "send_buf_size": 2097152, 00:34:23.287 "enable_recv_pipe": true, 00:34:23.287 "enable_quickack": false, 00:34:23.287 "enable_placement_id": 0, 00:34:23.287 "enable_zerocopy_send_server": true, 00:34:23.287 "enable_zerocopy_send_client": false, 00:34:23.288 "zerocopy_threshold": 0, 00:34:23.288 "tls_version": 0, 00:34:23.288 "enable_ktls": false 00:34:23.288 } 00:34:23.288 } 00:34:23.288 ] 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "subsystem": "vmd", 00:34:23.288 "config": [] 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "subsystem": "accel", 00:34:23.288 "config": [ 00:34:23.288 { 00:34:23.288 "method": "accel_set_options", 00:34:23.288 "params": { 00:34:23.288 "small_cache_size": 128, 00:34:23.288 "large_cache_size": 16, 00:34:23.288 "task_count": 2048, 00:34:23.288 "sequence_count": 2048, 00:34:23.288 "buf_count": 2048 00:34:23.288 } 00:34:23.288 } 00:34:23.288 ] 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "subsystem": "bdev", 00:34:23.288 "config": [ 00:34:23.288 { 00:34:23.288 "method": "bdev_set_options", 00:34:23.288 "params": { 00:34:23.288 "bdev_io_pool_size": 65535, 00:34:23.288 "bdev_io_cache_size": 256, 00:34:23.288 "bdev_auto_examine": true, 00:34:23.288 "iobuf_small_cache_size": 128, 00:34:23.288 "iobuf_large_cache_size": 16 00:34:23.288 } 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "method": "bdev_raid_set_options", 00:34:23.288 "params": { 00:34:23.288 "process_window_size_kb": 1024, 00:34:23.288 "process_max_bandwidth_mb_sec": 0 00:34:23.288 } 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "method": "bdev_iscsi_set_options", 00:34:23.288 "params": { 00:34:23.288 "timeout_sec": 30 00:34:23.288 } 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "method": "bdev_nvme_set_options", 00:34:23.288 "params": { 00:34:23.288 "action_on_timeout": "none", 00:34:23.288 "timeout_us": 0, 00:34:23.288 "timeout_admin_us": 0, 00:34:23.288 "keep_alive_timeout_ms": 10000, 00:34:23.288 "arbitration_burst": 0, 00:34:23.288 "low_priority_weight": 0, 00:34:23.288 "medium_priority_weight": 0, 
00:34:23.288 "high_priority_weight": 0, 00:34:23.288 "nvme_adminq_poll_period_us": 10000, 00:34:23.288 "nvme_ioq_poll_period_us": 0, 00:34:23.288 "io_queue_requests": 512, 00:34:23.288 "delay_cmd_submit": true, 00:34:23.288 "transport_retry_count": 4, 00:34:23.288 "bdev_retry_count": 3, 00:34:23.288 "transport_ack_timeout": 0, 00:34:23.288 "ctrlr_loss_timeout_sec": 0, 00:34:23.288 "reconnect_delay_sec": 0, 00:34:23.288 "fast_io_fail_timeout_sec": 0, 00:34:23.288 "disable_auto_failback": false, 00:34:23.288 "generate_uuids": false, 00:34:23.288 "transport_tos": 0, 00:34:23.288 "nvme_error_stat": false, 00:34:23.288 "rdma_srq_size": 0, 00:34:23.288 "io_path_stat": false, 00:34:23.288 "allow_accel_sequence": false, 00:34:23.288 "rdma_max_cq_size": 0, 00:34:23.288 "rdma_cm_event_timeout_ms": 0, 00:34:23.288 "dhchap_digests": [ 00:34:23.288 "sha256", 00:34:23.288 "sha384", 00:34:23.288 "sha512" 00:34:23.288 ], 00:34:23.288 "dhchap_dhgroups": [ 00:34:23.288 "null", 00:34:23.288 "ffdhe2048", 00:34:23.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:23.288 "ffdhe3072", 00:34:23.288 "ffdhe4096", 00:34:23.288 "ffdhe6144", 00:34:23.288 "ffdhe8192" 00:34:23.288 ] 00:34:23.288 } 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "method": "bdev_nvme_attach_controller", 00:34:23.288 "params": { 00:34:23.288 "name": "TLSTEST", 00:34:23.288 "trtype": "TCP", 00:34:23.288 "adrfam": "IPv4", 00:34:23.288 "traddr": "10.0.0.2", 00:34:23.288 "trsvcid": "4420", 00:34:23.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.288 "prchk_reftag": false, 00:34:23.288 "prchk_guard": false, 00:34:23.288 "ctrlr_loss_timeout_sec": 0, 00:34:23.288 "reconnect_delay_sec": 0, 00:34:23.288 "fast_io_fail_timeout_sec": 0, 00:34:23.288 "psk": "/tmp/tmp.x8ms3XD6vM", 00:34:23.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.288 "hdgst": false, 00:34:23.288 "ddgst": false 00:34:23.288 } 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "method": "bdev_nvme_set_hotplug", 00:34:23.288 "params": { 00:34:23.288 "period_us": 100000, 00:34:23.288 "enable": false 00:34:23.288 } 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "method": "bdev_wait_for_examine" 00:34:23.288 } 00:34:23.288 ] 00:34:23.288 }, 00:34:23.288 { 00:34:23.288 "subsystem": "nbd", 00:34:23.288 "config": [] 00:34:23.288 } 00:34:23.288 ] 00:34:23.288 }' 00:34:23.288 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:23.288 08:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:23.549 [2024-07-23 08:47:35.909257] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:23.549 [2024-07-23 08:47:35.909611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411063 ] 00:34:23.808 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.808 [2024-07-23 08:47:36.172749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.068 [2024-07-23 08:47:36.492532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:24.638 [2024-07-23 08:47:36.996697] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:24.638 [2024-07-23 08:47:36.996922] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:25.207 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:25.207 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:25.207 08:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:34:25.207 Running I/O for 10 seconds... 00:34:37.428 00:34:37.429 Latency(us) 00:34:37.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.429 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:37.429 Verification LBA range: start 0x0 length 0x2000 00:34:37.429 TLSTESTn1 : 10.03 1952.42 7.63 0.00 0.00 65403.20 11553.75 67574.90 00:34:37.429 =================================================================================================================== 00:34:37.429 Total : 1952.42 7.63 0.00 0.00 65403.20 11553.75 67574.90 00:34:37.429 0 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2411063 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2411063 ']' 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2411063 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2411063 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2411063' 00:34:37.429 killing process with pid 2411063 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2411063 00:34:37.429 Received shutdown signal, test time was about 10.000000 seconds 00:34:37.429 00:34:37.429 Latency(us) 00:34:37.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.429 
=================================================================================================================== 00:34:37.429 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:37.429 [2024-07-23 08:47:47.856015] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:37.429 08:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2411063 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2410821 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2410821 ']' 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2410821 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410821 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410821' 00:34:37.429 killing process with pid 2410821 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2410821 00:34:37.429 [2024-07-23 08:47:49.248981] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:37.429 08:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2410821 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2412652 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2412652 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2412652 ']' 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
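With the config-file-driven pair torn down (pids 2411063 and 2410821 above), target/tls.sh@218-219 starts a fresh target and builds the TLS subsystem over live RPC instead; the setup_nvmf_tgt trace that follows uses the same PSK file. Consolidated from that trace, the target-side recipe is:

# Consolidated from the setup_nvmf_tgt trace below (target/tls.sh@51-58);
# NQNs, addresses and the PSK file are the ones used in this run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM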
00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:38.808 08:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:38.808 [2024-07-23 08:47:51.212165] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:38.808 [2024-07-23 08:47:51.212458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.067 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.067 [2024-07-23 08:47:51.564840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.007 [2024-07-23 08:47:52.333859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.007 [2024-07-23 08:47:52.334011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.007 [2024-07-23 08:47:52.334104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.007 [2024-07-23 08:47:52.334190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.007 [2024-07-23 08:47:52.334274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.007 [2024-07-23 08:47:52.334427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.x8ms3XD6vM 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x8ms3XD6vM 00:34:40.588 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:41.157 [2024-07-23 08:47:53.428540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.157 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:41.416 08:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:34:41.986 [2024-07-23 08:47:54.396176] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:41.986 [2024-07-23 08:47:54.396702] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.986 08:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:34:42.555 malloc0 00:34:42.555 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:43.125 08:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x8ms3XD6vM 00:34:43.696 [2024-07-23 08:47:56.038275] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2413204 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2413204 /var/tmp/bdevperf.sock 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2413204 ']' 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:43.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:43.696 08:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:43.696 [2024-07-23 08:47:56.203855] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:43.696 [2024-07-23 08:47:56.204101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413204 ] 00:34:43.955 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.955 [2024-07-23 08:47:56.412972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.216 [2024-07-23 08:47:56.724475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.599 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:45.599 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:45.599 08:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x8ms3XD6vM 00:34:45.858 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:34:46.428 [2024-07-23 08:47:58.871071] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:46.688 nvme0n1 00:34:46.688 08:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:46.949 Running I/O for 1 seconds... 00:34:47.890 00:34:47.890 Latency(us) 00:34:47.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.890 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:47.890 Verification LBA range: start 0x0 length 0x2000 00:34:47.890 nvme0n1 : 1.04 1933.59 7.55 0.00 0.00 65071.24 11408.12 44855.75 00:34:47.890 =================================================================================================================== 00:34:47.890 Total : 1933.59 7.55 0.00 0.00 65071.24 11408.12 44855.75 00:34:47.890 0 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2413204 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2413204 ']' 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2413204 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2413204 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2413204' 00:34:47.890 killing process with pid 2413204 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2413204 00:34:47.890 Received shutdown signal, 
test time was about 1.000000 seconds 00:34:47.890 00:34:47.890 Latency(us) 00:34:47.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.890 =================================================================================================================== 00:34:47.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:47.890 08:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2413204 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2412652 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2412652 ']' 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2412652 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2412652 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2412652' 00:34:49.272 killing process with pid 2412652 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2412652 00:34:49.272 [2024-07-23 08:48:01.706461] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:49.272 08:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2412652 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2414239 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2414239 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2414239 ']' 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
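On the initiator side these later runs no longer hand a PSK file straight to bdev_nvme_attach_controller: at target/tls.sh@227-228 above (pid 2413204) and again at target/tls.sh@257-258 below (pid 2414473) the key is first registered with keyring_file_add_key and then referenced by name, and those attaches log only the "TLS support is considered experimental" notice, without the spdk_nvme_ctrlr_opts.psk deprecation warning seen in the earlier path-based runs. Consolidated from those traces:

# Keyring-based attach, consolidated from target/tls.sh@227-228 / @257-258;
# socket, addresses and key path are the ones from this run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x8ms3XD6vM
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1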
00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:52.570 08:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:52.570 [2024-07-23 08:48:04.454173] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:52.570 [2024-07-23 08:48:04.454356] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.570 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.570 [2024-07-23 08:48:04.622176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.570 [2024-07-23 08:48:05.056709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.570 [2024-07-23 08:48:05.056849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.570 [2024-07-23 08:48:05.056913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.570 [2024-07-23 08:48:05.056968] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.570 [2024-07-23 08:48:05.057020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.570 [2024-07-23 08:48:05.057119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.510 08:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:53.511 [2024-07-23 08:48:05.891044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.511 malloc0 00:34:53.511 [2024-07-23 08:48:06.008628] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:53.511 [2024-07-23 08:48:06.009247] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2414473 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2414473 /var/tmp/bdevperf.sock 00:34:53.771 08:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2414473 ']' 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:53.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:53.771 08:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:53.771 [2024-07-23 08:48:06.217602] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:53.771 [2024-07-23 08:48:06.217940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414473 ] 00:34:54.031 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.031 [2024-07-23 08:48:06.474695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.290 [2024-07-23 08:48:06.787212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.228 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:55.228 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:34:55.228 08:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x8ms3XD6vM 00:34:55.799 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:34:56.070 [2024-07-23 08:48:08.535010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:56.341 nvme0n1 00:34:56.341 08:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:56.601 Running I/O for 1 seconds... 
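
Annotation: stripped of the xtrace noise, the initiator-side TLS setup traced above is three RPC calls against the idle bdevperf application, using the exact arguments from the trace — register the PSK file as key0, attach an NVMe/TCP controller that references that key (which triggers the "TLS support is considered experimental" notice), then start the verify workload parameterised on the bdevperf command line (-q 128 -o 4k -t 1). $SPDK and $RPC below are shorthand for the repository root and rpc.py invocation shown in the trace.

RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# 1. expose the PSK file to the application under the key name "key0"
$RPC keyring_file_add_key key0 /tmp/tmp.x8ms3XD6vM

# 2. attach the NVMe/TCP controller; --psk selects a TLS connection with that key
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# 3. run the pre-configured workload against the resulting nvme0n1 bdev
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
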
00:34:57.543 00:34:57.543 Latency(us) 00:34:57.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.543 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:57.543 Verification LBA range: start 0x0 length 0x2000 00:34:57.543 nvme0n1 : 1.04 1922.31 7.51 0.00 0.00 65388.03 12136.30 48156.82 00:34:57.543 =================================================================================================================== 00:34:57.543 Total : 1922.31 7.51 0.00 0.00 65388.03 12136.30 48156.82 00:34:57.543 0 00:34:57.544 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:34:57.544 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.544 08:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:57.544 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.544 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:34:57.544 "subsystems": [ 00:34:57.544 { 00:34:57.544 "subsystem": "keyring", 00:34:57.544 "config": [ 00:34:57.544 { 00:34:57.544 "method": "keyring_file_add_key", 00:34:57.544 "params": { 00:34:57.544 "name": "key0", 00:34:57.544 "path": "/tmp/tmp.x8ms3XD6vM" 00:34:57.544 } 00:34:57.544 } 00:34:57.544 ] 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "subsystem": "iobuf", 00:34:57.544 "config": [ 00:34:57.544 { 00:34:57.544 "method": "iobuf_set_options", 00:34:57.544 "params": { 00:34:57.544 "small_pool_count": 8192, 00:34:57.544 "large_pool_count": 1024, 00:34:57.544 "small_bufsize": 8192, 00:34:57.544 "large_bufsize": 135168 00:34:57.544 } 00:34:57.544 } 00:34:57.544 ] 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "subsystem": "sock", 00:34:57.544 "config": [ 00:34:57.544 { 00:34:57.544 "method": "sock_set_default_impl", 00:34:57.544 "params": { 00:34:57.544 "impl_name": "posix" 00:34:57.544 } 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "method": "sock_impl_set_options", 00:34:57.544 "params": { 00:34:57.544 "impl_name": "ssl", 00:34:57.544 "recv_buf_size": 4096, 00:34:57.544 "send_buf_size": 4096, 00:34:57.544 "enable_recv_pipe": true, 00:34:57.544 "enable_quickack": false, 00:34:57.544 "enable_placement_id": 0, 00:34:57.544 "enable_zerocopy_send_server": true, 00:34:57.544 "enable_zerocopy_send_client": false, 00:34:57.544 "zerocopy_threshold": 0, 00:34:57.544 "tls_version": 0, 00:34:57.544 "enable_ktls": false 00:34:57.544 } 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "method": "sock_impl_set_options", 00:34:57.544 "params": { 00:34:57.544 "impl_name": "posix", 00:34:57.544 "recv_buf_size": 2097152, 00:34:57.544 "send_buf_size": 2097152, 00:34:57.544 "enable_recv_pipe": true, 00:34:57.544 "enable_quickack": false, 00:34:57.544 "enable_placement_id": 0, 00:34:57.544 "enable_zerocopy_send_server": true, 00:34:57.544 "enable_zerocopy_send_client": false, 00:34:57.544 "zerocopy_threshold": 0, 00:34:57.544 "tls_version": 0, 00:34:57.544 "enable_ktls": false 00:34:57.544 } 00:34:57.544 } 00:34:57.544 ] 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "subsystem": "vmd", 00:34:57.544 "config": [] 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "subsystem": "accel", 00:34:57.544 "config": [ 00:34:57.544 { 00:34:57.544 "method": "accel_set_options", 00:34:57.544 "params": { 00:34:57.544 "small_cache_size": 128, 00:34:57.544 "large_cache_size": 16, 00:34:57.544 "task_count": 2048, 00:34:57.544 "sequence_count": 2048, 00:34:57.544 "buf_count": 
2048 00:34:57.544 } 00:34:57.544 } 00:34:57.544 ] 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "subsystem": "bdev", 00:34:57.544 "config": [ 00:34:57.544 { 00:34:57.544 "method": "bdev_set_options", 00:34:57.544 "params": { 00:34:57.544 "bdev_io_pool_size": 65535, 00:34:57.544 "bdev_io_cache_size": 256, 00:34:57.544 "bdev_auto_examine": true, 00:34:57.544 "iobuf_small_cache_size": 128, 00:34:57.544 "iobuf_large_cache_size": 16 00:34:57.544 } 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "method": "bdev_raid_set_options", 00:34:57.544 "params": { 00:34:57.544 "process_window_size_kb": 1024, 00:34:57.544 "process_max_bandwidth_mb_sec": 0 00:34:57.544 } 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "method": "bdev_iscsi_set_options", 00:34:57.544 "params": { 00:34:57.544 "timeout_sec": 30 00:34:57.544 } 00:34:57.544 }, 00:34:57.544 { 00:34:57.544 "method": "bdev_nvme_set_options", 00:34:57.544 "params": { 00:34:57.544 "action_on_timeout": "none", 00:34:57.544 "timeout_us": 0, 00:34:57.544 "timeout_admin_us": 0, 00:34:57.544 "keep_alive_timeout_ms": 10000, 00:34:57.544 "arbitration_burst": 0, 00:34:57.544 "low_priority_weight": 0, 00:34:57.544 "medium_priority_weight": 0, 00:34:57.544 "high_priority_weight": 0, 00:34:57.544 "nvme_adminq_poll_period_us": 10000, 00:34:57.544 "nvme_ioq_poll_period_us": 0, 00:34:57.544 "io_queue_requests": 0, 00:34:57.544 "delay_cmd_submit": true, 00:34:57.544 "transport_retry_count": 4, 00:34:57.544 "bdev_retry_count": 3, 00:34:57.544 "transport_ack_timeout": 0, 00:34:57.544 "ctrlr_loss_timeout_sec": 0, 00:34:57.544 "reconnect_delay_sec": 0, 00:34:57.544 "fast_io_fail_timeout_sec": 0, 00:34:57.544 "disable_auto_failback": false, 00:34:57.544 "generate_uuids": false, 00:34:57.544 "transport_tos": 0, 00:34:57.544 "nvme_error_stat": false, 00:34:57.544 "rdma_srq_size": 0, 00:34:57.544 "io_path_stat": false, 00:34:57.544 "allow_accel_sequence": false, 00:34:57.544 "rdma_max_cq_size": 0, 00:34:57.544 "rdma_cm_event_timeout_ms": 0, 00:34:57.544 "dhchap_digests": [ 00:34:57.544 "sha256", 00:34:57.544 "sha384", 00:34:57.544 "sha512" 00:34:57.544 ], 00:34:57.544 "dhchap_dhgroups": [ 00:34:57.544 "null", 00:34:57.544 "ffdhe2048", 00:34:57.545 "ffdhe3072", 00:34:57.545 "ffdhe4096", 00:34:57.545 "ffdhe6144", 00:34:57.545 "ffdhe8192" 00:34:57.545 ] 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "bdev_nvme_set_hotplug", 00:34:57.545 "params": { 00:34:57.545 "period_us": 100000, 00:34:57.545 "enable": false 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "bdev_malloc_create", 00:34:57.545 "params": { 00:34:57.545 "name": "malloc0", 00:34:57.545 "num_blocks": 8192, 00:34:57.545 "block_size": 4096, 00:34:57.545 "physical_block_size": 4096, 00:34:57.545 "uuid": "d7dabb79-22d5-431c-a7c4-143062d7f6f9", 00:34:57.545 "optimal_io_boundary": 0, 00:34:57.545 "md_size": 0, 00:34:57.545 "dif_type": 0, 00:34:57.545 "dif_is_head_of_md": false, 00:34:57.545 "dif_pi_format": 0 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "bdev_wait_for_examine" 00:34:57.545 } 00:34:57.545 ] 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "subsystem": "nbd", 00:34:57.545 "config": [] 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "subsystem": "scheduler", 00:34:57.545 "config": [ 00:34:57.545 { 00:34:57.545 "method": "framework_set_scheduler", 00:34:57.545 "params": { 00:34:57.545 "name": "static" 00:34:57.545 } 00:34:57.545 } 00:34:57.545 ] 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "subsystem": "nvmf", 00:34:57.545 "config": [ 00:34:57.545 { 00:34:57.545 
"method": "nvmf_set_config", 00:34:57.545 "params": { 00:34:57.545 "discovery_filter": "match_any", 00:34:57.545 "admin_cmd_passthru": { 00:34:57.545 "identify_ctrlr": false 00:34:57.545 } 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "nvmf_set_max_subsystems", 00:34:57.545 "params": { 00:34:57.545 "max_subsystems": 1024 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "nvmf_set_crdt", 00:34:57.545 "params": { 00:34:57.545 "crdt1": 0, 00:34:57.545 "crdt2": 0, 00:34:57.545 "crdt3": 0 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "nvmf_create_transport", 00:34:57.545 "params": { 00:34:57.545 "trtype": "TCP", 00:34:57.545 "max_queue_depth": 128, 00:34:57.545 "max_io_qpairs_per_ctrlr": 127, 00:34:57.545 "in_capsule_data_size": 4096, 00:34:57.545 "max_io_size": 131072, 00:34:57.545 "io_unit_size": 131072, 00:34:57.545 "max_aq_depth": 128, 00:34:57.545 "num_shared_buffers": 511, 00:34:57.545 "buf_cache_size": 4294967295, 00:34:57.545 "dif_insert_or_strip": false, 00:34:57.545 "zcopy": false, 00:34:57.545 "c2h_success": false, 00:34:57.545 "sock_priority": 0, 00:34:57.545 "abort_timeout_sec": 1, 00:34:57.545 "ack_timeout": 0, 00:34:57.545 "data_wr_pool_size": 0 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "nvmf_create_subsystem", 00:34:57.545 "params": { 00:34:57.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.545 "allow_any_host": false, 00:34:57.545 "serial_number": "00000000000000000000", 00:34:57.545 "model_number": "SPDK bdev Controller", 00:34:57.545 "max_namespaces": 32, 00:34:57.545 "min_cntlid": 1, 00:34:57.545 "max_cntlid": 65519, 00:34:57.545 "ana_reporting": false 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "nvmf_subsystem_add_host", 00:34:57.545 "params": { 00:34:57.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.545 "host": "nqn.2016-06.io.spdk:host1", 00:34:57.545 "psk": "key0" 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "nvmf_subsystem_add_ns", 00:34:57.545 "params": { 00:34:57.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.545 "namespace": { 00:34:57.545 "nsid": 1, 00:34:57.545 "bdev_name": "malloc0", 00:34:57.545 "nguid": "D7DABB7922D5431CA7C4143062D7F6F9", 00:34:57.545 "uuid": "d7dabb79-22d5-431c-a7c4-143062d7f6f9", 00:34:57.545 "no_auto_visible": false 00:34:57.545 } 00:34:57.545 } 00:34:57.545 }, 00:34:57.545 { 00:34:57.545 "method": "nvmf_subsystem_add_listener", 00:34:57.545 "params": { 00:34:57.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.545 "listen_address": { 00:34:57.545 "trtype": "TCP", 00:34:57.545 "adrfam": "IPv4", 00:34:57.545 "traddr": "10.0.0.2", 00:34:57.545 "trsvcid": "4420" 00:34:57.545 }, 00:34:57.545 "secure_channel": false, 00:34:57.545 "sock_impl": "ssl" 00:34:57.545 } 00:34:57.545 } 00:34:57.545 ] 00:34:57.545 } 00:34:57.545 ] 00:34:57.545 }' 00:34:57.545 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:34:58.116 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:34:58.116 "subsystems": [ 00:34:58.116 { 00:34:58.116 "subsystem": "keyring", 00:34:58.116 "config": [ 00:34:58.116 { 00:34:58.116 "method": "keyring_file_add_key", 00:34:58.116 "params": { 00:34:58.116 "name": "key0", 00:34:58.116 "path": "/tmp/tmp.x8ms3XD6vM" 00:34:58.116 } 00:34:58.116 } 00:34:58.116 ] 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "subsystem": "iobuf", 00:34:58.116 
"config": [ 00:34:58.116 { 00:34:58.116 "method": "iobuf_set_options", 00:34:58.116 "params": { 00:34:58.116 "small_pool_count": 8192, 00:34:58.116 "large_pool_count": 1024, 00:34:58.116 "small_bufsize": 8192, 00:34:58.116 "large_bufsize": 135168 00:34:58.116 } 00:34:58.116 } 00:34:58.116 ] 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "subsystem": "sock", 00:34:58.116 "config": [ 00:34:58.116 { 00:34:58.116 "method": "sock_set_default_impl", 00:34:58.116 "params": { 00:34:58.116 "impl_name": "posix" 00:34:58.116 } 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "method": "sock_impl_set_options", 00:34:58.116 "params": { 00:34:58.116 "impl_name": "ssl", 00:34:58.116 "recv_buf_size": 4096, 00:34:58.116 "send_buf_size": 4096, 00:34:58.116 "enable_recv_pipe": true, 00:34:58.116 "enable_quickack": false, 00:34:58.116 "enable_placement_id": 0, 00:34:58.116 "enable_zerocopy_send_server": true, 00:34:58.116 "enable_zerocopy_send_client": false, 00:34:58.116 "zerocopy_threshold": 0, 00:34:58.116 "tls_version": 0, 00:34:58.116 "enable_ktls": false 00:34:58.116 } 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "method": "sock_impl_set_options", 00:34:58.116 "params": { 00:34:58.116 "impl_name": "posix", 00:34:58.116 "recv_buf_size": 2097152, 00:34:58.116 "send_buf_size": 2097152, 00:34:58.116 "enable_recv_pipe": true, 00:34:58.116 "enable_quickack": false, 00:34:58.116 "enable_placement_id": 0, 00:34:58.116 "enable_zerocopy_send_server": true, 00:34:58.116 "enable_zerocopy_send_client": false, 00:34:58.116 "zerocopy_threshold": 0, 00:34:58.116 "tls_version": 0, 00:34:58.116 "enable_ktls": false 00:34:58.116 } 00:34:58.116 } 00:34:58.116 ] 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "subsystem": "vmd", 00:34:58.116 "config": [] 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "subsystem": "accel", 00:34:58.116 "config": [ 00:34:58.116 { 00:34:58.116 "method": "accel_set_options", 00:34:58.116 "params": { 00:34:58.116 "small_cache_size": 128, 00:34:58.116 "large_cache_size": 16, 00:34:58.116 "task_count": 2048, 00:34:58.116 "sequence_count": 2048, 00:34:58.116 "buf_count": 2048 00:34:58.116 } 00:34:58.116 } 00:34:58.116 ] 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "subsystem": "bdev", 00:34:58.116 "config": [ 00:34:58.116 { 00:34:58.116 "method": "bdev_set_options", 00:34:58.116 "params": { 00:34:58.116 "bdev_io_pool_size": 65535, 00:34:58.116 "bdev_io_cache_size": 256, 00:34:58.116 "bdev_auto_examine": true, 00:34:58.116 "iobuf_small_cache_size": 128, 00:34:58.116 "iobuf_large_cache_size": 16 00:34:58.116 } 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "method": "bdev_raid_set_options", 00:34:58.116 "params": { 00:34:58.116 "process_window_size_kb": 1024, 00:34:58.116 "process_max_bandwidth_mb_sec": 0 00:34:58.116 } 00:34:58.116 }, 00:34:58.116 { 00:34:58.116 "method": "bdev_iscsi_set_options", 00:34:58.116 "params": { 00:34:58.116 "timeout_sec": 30 00:34:58.116 } 00:34:58.117 }, 00:34:58.117 { 00:34:58.117 "method": "bdev_nvme_set_options", 00:34:58.117 "params": { 00:34:58.117 "action_on_timeout": "none", 00:34:58.117 "timeout_us": 0, 00:34:58.117 "timeout_admin_us": 0, 00:34:58.117 "keep_alive_timeout_ms": 10000, 00:34:58.117 "arbitration_burst": 0, 00:34:58.117 "low_priority_weight": 0, 00:34:58.117 "medium_priority_weight": 0, 00:34:58.117 "high_priority_weight": 0, 00:34:58.117 "nvme_adminq_poll_period_us": 10000, 00:34:58.117 "nvme_ioq_poll_period_us": 0, 00:34:58.117 "io_queue_requests": 512, 00:34:58.117 "delay_cmd_submit": true, 00:34:58.117 "transport_retry_count": 4, 00:34:58.117 "bdev_retry_count": 3, 
00:34:58.117 "transport_ack_timeout": 0, 00:34:58.117 "ctrlr_loss_timeout_sec": 0, 00:34:58.117 "reconnect_delay_sec": 0, 00:34:58.117 "fast_io_fail_timeout_sec": 0, 00:34:58.117 "disable_auto_failback": false, 00:34:58.117 "generate_uuids": false, 00:34:58.117 "transport_tos": 0, 00:34:58.117 "nvme_error_stat": false, 00:34:58.117 "rdma_srq_size": 0, 00:34:58.117 "io_path_stat": false, 00:34:58.117 "allow_accel_sequence": false, 00:34:58.117 "rdma_max_cq_size": 0, 00:34:58.117 "rdma_cm_event_timeout_ms": 0, 00:34:58.117 "dhchap_digests": [ 00:34:58.117 "sha256", 00:34:58.117 "sha384", 00:34:58.117 "sha512" 00:34:58.117 ], 00:34:58.117 "dhchap_dhgroups": [ 00:34:58.117 "null", 00:34:58.117 "ffdhe2048", 00:34:58.117 "ffdhe3072", 00:34:58.117 "ffdhe4096", 00:34:58.117 "ffdhe6144", 00:34:58.117 "ffdhe8192" 00:34:58.117 ] 00:34:58.117 } 00:34:58.117 }, 00:34:58.117 { 00:34:58.117 "method": "bdev_nvme_attach_controller", 00:34:58.117 "params": { 00:34:58.117 "name": "nvme0", 00:34:58.117 "trtype": "TCP", 00:34:58.117 "adrfam": "IPv4", 00:34:58.117 "traddr": "10.0.0.2", 00:34:58.117 "trsvcid": "4420", 00:34:58.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.117 "prchk_reftag": false, 00:34:58.117 "prchk_guard": false, 00:34:58.117 "ctrlr_loss_timeout_sec": 0, 00:34:58.117 "reconnect_delay_sec": 0, 00:34:58.117 "fast_io_fail_timeout_sec": 0, 00:34:58.117 "psk": "key0", 00:34:58.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.117 "hdgst": false, 00:34:58.117 "ddgst": false 00:34:58.117 } 00:34:58.117 }, 00:34:58.117 { 00:34:58.117 "method": "bdev_nvme_set_hotplug", 00:34:58.117 "params": { 00:34:58.117 "period_us": 100000, 00:34:58.117 "enable": false 00:34:58.117 } 00:34:58.117 }, 00:34:58.117 { 00:34:58.117 "method": "bdev_enable_histogram", 00:34:58.117 "params": { 00:34:58.117 "name": "nvme0n1", 00:34:58.117 "enable": true 00:34:58.117 } 00:34:58.117 }, 00:34:58.117 { 00:34:58.117 "method": "bdev_wait_for_examine" 00:34:58.117 } 00:34:58.117 ] 00:34:58.117 }, 00:34:58.117 { 00:34:58.117 "subsystem": "nbd", 00:34:58.117 "config": [] 00:34:58.117 } 00:34:58.117 ] 00:34:58.117 }' 00:34:58.117 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2414473 00:34:58.117 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2414473 ']' 00:34:58.117 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2414473 00:34:58.117 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:58.117 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:58.117 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2414473 00:34:58.377 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:58.377 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:58.377 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2414473' 00:34:58.377 killing process with pid 2414473 00:34:58.377 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2414473 00:34:58.377 Received shutdown signal, test time was about 1.000000 seconds 00:34:58.377 00:34:58.377 Latency(us) 00:34:58.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.377 
=================================================================================================================== 00:34:58.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:58.377 08:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2414473 00:34:59.759 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2414239 00:34:59.760 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2414239 ']' 00:34:59.760 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2414239 00:34:59.760 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:34:59.760 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:59.760 08:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2414239 00:34:59.760 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:59.760 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:59.760 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2414239' 00:34:59.760 killing process with pid 2414239 00:34:59.760 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2414239 00:34:59.760 08:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2414239 00:35:02.300 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:35:02.300 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:02.300 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:35:02.300 "subsystems": [ 00:35:02.300 { 00:35:02.300 "subsystem": "keyring", 00:35:02.300 "config": [ 00:35:02.300 { 00:35:02.300 "method": "keyring_file_add_key", 00:35:02.300 "params": { 00:35:02.300 "name": "key0", 00:35:02.300 "path": "/tmp/tmp.x8ms3XD6vM" 00:35:02.300 } 00:35:02.300 } 00:35:02.300 ] 00:35:02.300 }, 00:35:02.300 { 00:35:02.300 "subsystem": "iobuf", 00:35:02.300 "config": [ 00:35:02.300 { 00:35:02.300 "method": "iobuf_set_options", 00:35:02.300 "params": { 00:35:02.300 "small_pool_count": 8192, 00:35:02.300 "large_pool_count": 1024, 00:35:02.300 "small_bufsize": 8192, 00:35:02.300 "large_bufsize": 135168 00:35:02.300 } 00:35:02.300 } 00:35:02.300 ] 00:35:02.300 }, 00:35:02.300 { 00:35:02.300 "subsystem": "sock", 00:35:02.300 "config": [ 00:35:02.300 { 00:35:02.300 "method": "sock_set_default_impl", 00:35:02.300 "params": { 00:35:02.300 "impl_name": "posix" 00:35:02.300 } 00:35:02.300 }, 00:35:02.300 { 00:35:02.300 "method": "sock_impl_set_options", 00:35:02.300 "params": { 00:35:02.300 "impl_name": "ssl", 00:35:02.300 "recv_buf_size": 4096, 00:35:02.300 "send_buf_size": 4096, 00:35:02.300 "enable_recv_pipe": true, 00:35:02.300 "enable_quickack": false, 00:35:02.300 "enable_placement_id": 0, 00:35:02.300 "enable_zerocopy_send_server": true, 00:35:02.300 "enable_zerocopy_send_client": false, 00:35:02.300 "zerocopy_threshold": 0, 00:35:02.300 "tls_version": 0, 00:35:02.300 "enable_ktls": false 00:35:02.300 } 00:35:02.300 }, 00:35:02.301 { 00:35:02.301 "method": "sock_impl_set_options", 00:35:02.301 "params": { 00:35:02.301 "impl_name": "posix", 00:35:02.301 "recv_buf_size": 2097152, 
00:35:02.301 "send_buf_size": 2097152, 00:35:02.301 "enable_recv_pipe": true, 00:35:02.301 "enable_quickack": false, 00:35:02.301 "enable_placement_id": 0, 00:35:02.301 "enable_zerocopy_send_server": true, 00:35:02.301 "enable_zerocopy_send_client": false, 00:35:02.301 "zerocopy_threshold": 0, 00:35:02.301 "tls_version": 0, 00:35:02.301 "enable_ktls": false 00:35:02.301 } 00:35:02.301 } 00:35:02.301 ] 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "subsystem": "vmd", 00:35:02.301 "config": [] 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "subsystem": "accel", 00:35:02.301 "config": [ 00:35:02.301 { 00:35:02.301 "method": "accel_set_options", 00:35:02.301 "params": { 00:35:02.301 "small_cache_size": 128, 00:35:02.301 "large_cache_size": 16, 00:35:02.301 "task_count": 2048, 00:35:02.301 "sequence_count": 2048, 00:35:02.301 "buf_count": 2048 00:35:02.301 } 00:35:02.301 } 00:35:02.301 ] 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "subsystem": "bdev", 00:35:02.301 "config": [ 00:35:02.301 { 00:35:02.301 "method": "bdev_set_options", 00:35:02.301 "params": { 00:35:02.301 "bdev_io_pool_size": 65535, 00:35:02.301 "bdev_io_cache_size": 256, 00:35:02.301 "bdev_auto_examine": true, 00:35:02.301 "iobuf_small_cache_size": 128, 00:35:02.301 "iobuf_large_cache_size": 16 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "bdev_raid_set_options", 00:35:02.301 "params": { 00:35:02.301 "process_window_size_kb": 1024, 00:35:02.301 "process_max_bandwidth_mb_sec": 0 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "bdev_iscsi_set_options", 00:35:02.301 "params": { 00:35:02.301 "timeout_sec": 30 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "bdev_nvme_set_options", 00:35:02.301 "params": { 00:35:02.301 "action_on_timeout": "none", 00:35:02.301 "timeout_us": 0, 00:35:02.301 "timeout_admin_us": 0, 00:35:02.301 "keep_alive_timeout_ms": 10000, 00:35:02.301 "arbitration_burst": 0, 00:35:02.301 "low_priority_weight": 0, 00:35:02.301 "medium_priority_weight": 0, 00:35:02.301 "high_priority_weight": 0, 00:35:02.301 "nvme_adminq_poll_period_us": 10000, 00:35:02.301 "nvme_ioq_poll_period_us": 0, 00:35:02.301 "io_queue_requests": 0, 00:35:02.301 "delay_cmd_submit": true, 00:35:02.301 "transport_retry_count": 4, 00:35:02.301 "bdev_retry_count": 3, 00:35:02.301 "transport_ack_timeout": 0, 00:35:02.301 "ctrlr_loss_timeout_sec": 0, 00:35:02.301 "reconnect_delay_sec": 0, 00:35:02.301 "fast_io_fail_timeout_sec": 0, 00:35:02.301 "disable_auto_failback": false, 00:35:02.301 "generate_uuids": false, 00:35:02.301 "transport_tos": 0, 00:35:02.301 "nvme_error_stat": false, 00:35:02.301 "rdma_srq_size": 0, 00:35:02.301 "io_path_stat": false, 00:35:02.301 "allow_accel_sequence": false, 00:35:02.301 "rdma_max_cq_size": 0, 00:35:02.301 "rdma_cm_event_timeout_ms": 0, 00:35:02.301 "dhchap_digests": [ 00:35:02.301 "sha256", 00:35:02.301 "sha384", 00:35:02.301 "sha512" 00:35:02.301 ], 00:35:02.301 "dhchap_dhgroups": [ 00:35:02.301 "null", 00:35:02.301 "ffdhe2048", 00:35:02.301 "ffdhe3072", 00:35:02.301 "ffdhe4096", 00:35:02.301 "ffdhe6144", 00:35:02.301 "ffdhe8192" 00:35:02.301 ] 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "bdev_nvme_set_hotplug", 00:35:02.301 "params": { 00:35:02.301 "period_us": 100000, 00:35:02.301 "enable": false 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "bdev_malloc_create", 00:35:02.301 "params": { 00:35:02.301 "name": "malloc0", 00:35:02.301 "num_blocks": 8192, 00:35:02.301 "block_size": 4096, 00:35:02.301 
"physical_block_size": 4096, 00:35:02.301 "uuid": "d7dabb79-22d5-431c-a7c4-143062d7f6f9", 00:35:02.301 "optimal_io_boundary": 0, 00:35:02.301 "md_size": 0, 00:35:02.301 "dif_type": 0, 00:35:02.301 "dif_is_head_of_md": false, 00:35:02.301 "dif_pi_format": 0 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "bdev_wait_for_examine" 00:35:02.301 } 00:35:02.301 ] 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "subsystem": "nbd", 00:35:02.301 "config": [] 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "subsystem": "scheduler", 00:35:02.301 "config": [ 00:35:02.301 { 00:35:02.301 "method": "framework_set_scheduler", 00:35:02.301 "params": { 00:35:02.301 "name": "static" 00:35:02.301 } 00:35:02.301 } 00:35:02.301 ] 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "subsystem": "nvmf", 00:35:02.301 "config": [ 00:35:02.301 { 00:35:02.301 "method": "nvmf_set_config", 00:35:02.301 "params": { 00:35:02.301 "discovery_filter": "match_any", 00:35:02.301 "admin_cmd_passthru": { 00:35:02.301 "identify_ctrlr": false 00:35:02.301 } 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "nvmf_set_max_subsystems", 00:35:02.301 "params": { 00:35:02.301 "max_subsystems": 1024 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "nvmf_set_crdt", 00:35:02.301 "params": { 00:35:02.301 "crdt1": 0, 00:35:02.301 "crdt2": 0, 00:35:02.301 "crdt3": 0 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "nvmf_create_transport", 00:35:02.301 "params": { 00:35:02.301 "trtype": "TCP", 00:35:02.301 "max_queue_depth": 128, 00:35:02.301 "max_io_qpairs_per_ctrlr": 127, 00:35:02.301 "in_capsule_data_size": 4096, 00:35:02.301 "max_io_size": 131072, 00:35:02.301 "io_unit_size": 131072, 00:35:02.301 "max_aq_depth": 128, 00:35:02.301 "num_shared_buffers": 511, 00:35:02.301 "buf_cache_size": 4294967295, 00:35:02.301 "dif_insert_or_strip": false, 00:35:02.301 "zcopy": false, 00:35:02.301 "c2h_success": false, 00:35:02.301 "sock_priority": 0, 00:35:02.301 "abort_timeout_sec": 1, 00:35:02.301 "ack_timeout": 0, 00:35:02.301 "data_wr_pool_size": 0 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "nvmf_create_subsystem", 00:35:02.301 "params": { 00:35:02.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.301 "allow_any_host": false, 00:35:02.301 "serial_number": "00000000000000000000", 00:35:02.301 "model_number": "SPDK bdev Controller", 00:35:02.301 "max_namespaces": 32, 00:35:02.301 "min_cntlid": 1, 00:35:02.301 "max_cntlid": 65519, 00:35:02.301 "ana_reporting": false 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "nvmf_subsystem_add_host", 00:35:02.301 "params": { 00:35:02.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.301 "host": "nqn.2016-06.io.spdk:host1", 00:35:02.301 "psk": "key0" 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "nvmf_subsystem_add_ns", 00:35:02.301 "params": { 00:35:02.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.301 "namespace": { 00:35:02.301 "nsid": 1, 00:35:02.301 "bdev_name": "malloc0", 00:35:02.301 "nguid": "D7DABB7922D5431CA7C4143062D7F6F9", 00:35:02.301 "uuid": "d7dabb79-22d5-431c-a7c4-143062d7f6f9", 00:35:02.301 "no_auto_visible": false 00:35:02.301 } 00:35:02.301 } 00:35:02.301 }, 00:35:02.301 { 00:35:02.301 "method": "nvmf_subsystem_add_listener", 00:35:02.301 "params": { 00:35:02.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.301 "listen_address": { 00:35:02.301 "trtype": "TCP", 00:35:02.301 "adrfam": "IPv4", 00:35:02.301 "traddr": "10.0.0.2", 00:35:02.301 "trsvcid": "4420" 
00:35:02.301 }, 00:35:02.301 "secure_channel": false, 00:35:02.301 "sock_impl": "ssl" 00:35:02.301 } 00:35:02.301 } 00:35:02.301 ] 00:35:02.301 } 00:35:02.301 ] 00:35:02.301 }' 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2415946 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2415946 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2415946 ']' 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:02.301 08:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:02.560 [2024-07-23 08:48:14.875802] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:02.560 [2024-07-23 08:48:14.876004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.560 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.818 [2024-07-23 08:48:15.098474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.077 [2024-07-23 08:48:15.574047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.077 [2024-07-23 08:48:15.574176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.077 [2024-07-23 08:48:15.574240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.077 [2024-07-23 08:48:15.574295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.077 [2024-07-23 08:48:15.574378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
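
Annotation: the JSON blob printed above was not written by hand — it is the output of the earlier save_config RPC (tgtcfg), echoed back into a brand-new nvmf_tgt through -c /dev/fd/62, so the restarted target comes back with the same keyring key, TLS listener, subsystem and namespace without replaying individual RPCs. A hedged sketch of that round trip follows; exactly how fd 62 is plumbed is an assumption here (the harness hides it inside nvmfappstart), so process substitution is used as a stand-in.

# 1. capture the live target configuration as JSON
tgtcfg=$("$SPDK/scripts/rpc.py" save_config)

# 2. ...stop the old target, then start a new one primed with that config;
#    <(...) stands in for the /dev/fd/62 redirection seen in the trace
sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF \
    -c <(echo "$tgtcfg") &
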
00:35:03.077 [2024-07-23 08:48:15.574586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.018 [2024-07-23 08:48:16.486811] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.018 [2024-07-23 08:48:16.522534] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:04.018 [2024-07-23 08:48:16.523048] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.278 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:04.278 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:35:04.278 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:04.278 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:04.278 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2416115 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2416115 /var/tmp/bdevperf.sock 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2416115 ']' 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:04.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
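
Annotation: this second bdevperf instance is started idle and fully pre-configured — -z keeps it waiting for an explicit perform_tests RPC, and -c /dev/fd/63 feeds it the bperfcfg JSON echoed just below, so the keyring key and TLS controller exist before any I/O is issued. The sketch below shows the same launch with a trimmed version of that config written to a file; the flag meanings are paraphrased rather than quoted from the tool's help text, /tmp/bperf.json is an illustrative path, and the full dump below also pins iobuf/sock/accel defaults that are omitted here on the assumption the defaults suffice.

# Trimmed-down stand-in for the config fed to bdevperf via /dev/fd/63 above.
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.x8ms3XD6vM" } } ] },
    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "bdev_enable_histogram",
          "params": { "name": "nvme0n1", "enable": true } },
        { "method": "bdev_wait_for_examine" } ] }
  ]
}
EOF

# core mask 0x2, idle until perform_tests, private RPC socket, queue depth 128,
# 4 KiB verify workload for 1 second, config pre-loaded instead of issued via RPC
"$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/bperf.json &
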
00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:35:04.279 "subsystems": [ 00:35:04.279 { 00:35:04.279 "subsystem": "keyring", 00:35:04.279 "config": [ 00:35:04.279 { 00:35:04.279 "method": "keyring_file_add_key", 00:35:04.279 "params": { 00:35:04.279 "name": "key0", 00:35:04.279 "path": "/tmp/tmp.x8ms3XD6vM" 00:35:04.279 } 00:35:04.279 } 00:35:04.279 ] 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "subsystem": "iobuf", 00:35:04.279 "config": [ 00:35:04.279 { 00:35:04.279 "method": "iobuf_set_options", 00:35:04.279 "params": { 00:35:04.279 "small_pool_count": 8192, 00:35:04.279 "large_pool_count": 1024, 00:35:04.279 "small_bufsize": 8192, 00:35:04.279 "large_bufsize": 135168 00:35:04.279 } 00:35:04.279 } 00:35:04.279 ] 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "subsystem": "sock", 00:35:04.279 "config": [ 00:35:04.279 { 00:35:04.279 "method": "sock_set_default_impl", 00:35:04.279 "params": { 00:35:04.279 "impl_name": "posix" 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "sock_impl_set_options", 00:35:04.279 "params": { 00:35:04.279 "impl_name": "ssl", 00:35:04.279 "recv_buf_size": 4096, 00:35:04.279 "send_buf_size": 4096, 00:35:04.279 "enable_recv_pipe": true, 00:35:04.279 "enable_quickack": false, 00:35:04.279 "enable_placement_id": 0, 00:35:04.279 "enable_zerocopy_send_server": true, 00:35:04.279 "enable_zerocopy_send_client": false, 00:35:04.279 "zerocopy_threshold": 0, 00:35:04.279 "tls_version": 0, 00:35:04.279 "enable_ktls": false 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "sock_impl_set_options", 00:35:04.279 "params": { 00:35:04.279 "impl_name": "posix", 00:35:04.279 "recv_buf_size": 2097152, 00:35:04.279 "send_buf_size": 2097152, 00:35:04.279 "enable_recv_pipe": true, 00:35:04.279 "enable_quickack": false, 00:35:04.279 "enable_placement_id": 0, 00:35:04.279 "enable_zerocopy_send_server": true, 00:35:04.279 "enable_zerocopy_send_client": false, 00:35:04.279 "zerocopy_threshold": 0, 00:35:04.279 "tls_version": 0, 00:35:04.279 "enable_ktls": false 00:35:04.279 } 00:35:04.279 } 00:35:04.279 ] 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "subsystem": "vmd", 00:35:04.279 "config": [] 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "subsystem": "accel", 00:35:04.279 "config": [ 00:35:04.279 { 00:35:04.279 "method": "accel_set_options", 00:35:04.279 "params": { 00:35:04.279 "small_cache_size": 128, 00:35:04.279 "large_cache_size": 16, 00:35:04.279 "task_count": 2048, 00:35:04.279 "sequence_count": 2048, 00:35:04.279 "buf_count": 2048 00:35:04.279 } 00:35:04.279 } 00:35:04.279 ] 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "subsystem": "bdev", 00:35:04.279 "config": [ 00:35:04.279 { 00:35:04.279 "method": "bdev_set_options", 00:35:04.279 "params": { 00:35:04.279 "bdev_io_pool_size": 65535, 00:35:04.279 "bdev_io_cache_size": 256, 00:35:04.279 "bdev_auto_examine": true, 00:35:04.279 "iobuf_small_cache_size": 128, 00:35:04.279 "iobuf_large_cache_size": 16 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "bdev_raid_set_options", 00:35:04.279 "params": { 00:35:04.279 "process_window_size_kb": 1024, 00:35:04.279 "process_max_bandwidth_mb_sec": 0 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "bdev_iscsi_set_options", 00:35:04.279 "params": { 00:35:04.279 "timeout_sec": 30 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": 
"bdev_nvme_set_options", 00:35:04.279 "params": { 00:35:04.279 "action_on_timeout": "none", 00:35:04.279 "timeout_us": 0, 00:35:04.279 "timeout_admin_us": 0, 00:35:04.279 "keep_alive_timeout_ms": 10000, 00:35:04.279 "arbitration_burst": 0, 00:35:04.279 "low_priority_weight": 0, 00:35:04.279 "medium_priority_weight": 0, 00:35:04.279 "high_priority_weight": 0, 00:35:04.279 "nvme_adminq_poll_period_us": 10000, 00:35:04.279 "nvme_ioq_poll_period_us": 0, 00:35:04.279 "io_queue_requests": 512, 00:35:04.279 "delay_cmd_submit": true, 00:35:04.279 "transport_retry_count": 4, 00:35:04.279 "bdev_retry_count": 3, 00:35:04.279 "transport_ack_timeout": 0, 00:35:04.279 "ctrlr_loss_timeout_sec": 0, 00:35:04.279 "reconnect_delay_sec": 0, 00:35:04.279 "fast_io_fail_timeout_sec": 0, 00:35:04.279 "disable_auto_failback": false, 00:35:04.279 "generate_uuids": false, 00:35:04.279 "transport_tos": 0, 00:35:04.279 "nvme_error_stat": false, 00:35:04.279 "rdma_srq_size": 0, 00:35:04.279 "io_path_stat": false, 00:35:04.279 "allow_accel_sequence": false, 00:35:04.279 "rdma_max_cq_size": 0, 00:35:04.279 "rdma_cm_event_timeout_ms": 0, 00:35:04.279 "dhchap_digests": [ 00:35:04.279 "sha256", 00:35:04.279 "sha384", 00:35:04.279 "sha512" 00:35:04.279 ], 00:35:04.279 "dhchap_dhgroups": [ 00:35:04.279 "null", 00:35:04.279 "ffdhe2048", 00:35:04.279 "ffdhe3072", 00:35:04.279 "ffdhe4096", 00:35:04.279 "ffdhe6144", 00:35:04.279 "ffdhe8192" 00:35:04.279 ] 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "bdev_nvme_attach_controller", 00:35:04.279 "params": { 00:35:04.279 "name": "nvme0", 00:35:04.279 "trtype": "TCP", 00:35:04.279 "adrfam": "IPv4", 00:35:04.279 "traddr": "10.0.0.2", 00:35:04.279 "trsvcid": "4420", 00:35:04.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.279 "prchk_reftag": false, 00:35:04.279 "prchk_guard": false, 00:35:04.279 "ctrlr_loss_timeout_sec": 0, 00:35:04.279 "reconnect_delay_sec": 0, 00:35:04.279 "fast_io_fail_timeout_sec": 0, 00:35:04.279 "psk": "key0", 00:35:04.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:04.279 "hdgst": false, 00:35:04.279 "ddgst": false 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "bdev_nvme_set_hotplug", 00:35:04.279 "params": { 00:35:04.279 "period_us": 100000, 00:35:04.279 "enable": false 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "bdev_enable_histogram", 00:35:04.279 "params": { 00:35:04.279 "name": "nvme0n1", 00:35:04.279 "enable": true 00:35:04.279 } 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "method": "bdev_wait_for_examine" 00:35:04.279 } 00:35:04.279 ] 00:35:04.279 }, 00:35:04.279 { 00:35:04.279 "subsystem": "nbd", 00:35:04.279 "config": [] 00:35:04.279 } 00:35:04.279 ] 00:35:04.279 }' 00:35:04.279 08:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:04.279 [2024-07-23 08:48:16.767427] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:35:04.280 [2024-07-23 08:48:16.767648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416115 ] 00:35:04.538 EAL: No free 2048 kB hugepages reported on node 1 00:35:04.538 [2024-07-23 08:48:17.002896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.798 [2024-07-23 08:48:17.315822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.367 [2024-07-23 08:48:17.825759] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:05.624 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:05.624 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:35:05.624 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:05.624 08:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:35:05.882 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.882 08:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:05.882 Running I/O for 1 seconds... 00:35:07.264 00:35:07.264 Latency(us) 00:35:07.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.264 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:07.264 Verification LBA range: start 0x0 length 0x2000 00:35:07.264 nvme0n1 : 1.03 1913.72 7.48 0.00 0.00 65745.01 13592.65 83886.08 00:35:07.264 =================================================================================================================== 00:35:07.264 Total : 1913.72 7.48 0.00 0.00 65745.01 13592.65 83886.08 00:35:07.264 0 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:07.264 nvmf_trace.0 00:35:07.264 08:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2416115 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2416115 ']' 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2416115 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2416115 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2416115' 00:35:07.264 killing process with pid 2416115 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2416115 00:35:07.264 Received shutdown signal, test time was about 1.000000 seconds 00:35:07.264 00:35:07.264 Latency(us) 00:35:07.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.264 =================================================================================================================== 00:35:07.264 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.264 08:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2416115 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:08.646 rmmod nvme_tcp 00:35:08.646 rmmod nvme_fabrics 00:35:08.646 rmmod nvme_keyring 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2415946 ']' 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2415946 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2415946 ']' 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2415946 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:35:08.646 08:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:08.646 08:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2415946 00:35:08.646 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:08.646 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:08.646 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2415946' 00:35:08.646 killing process with pid 2415946 00:35:08.646 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2415946 00:35:08.646 08:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2415946 00:35:11.936 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:11.936 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:11.936 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:11.937 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:11.937 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:11.937 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.937 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.937 08:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.FCrleno4db /tmp/tmp.hlimMSb2Ix /tmp/tmp.x8ms3XD6vM 00:35:13.859 00:35:13.859 real 2m25.944s 00:35:13.859 user 4m4.209s 00:35:13.859 sys 0m34.982s 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:13.859 ************************************ 00:35:13.859 END TEST nvmf_tls 00:35:13.859 ************************************ 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:35:13.859 ************************************ 00:35:13.859 START TEST nvmf_fips 00:35:13.859 ************************************ 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:35:13.859 * Looking for test storage... 
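
Annotation: end-of-test teardown, condensed — the trace buffer is archived from /dev/shm, both applications are killed, the initiator kernel modules are unloaded, the test namespace and addresses are cleaned up, and the generated PSK files are removed. A rough bash equivalent of the cleanup traced above; $OUT, the pid variables, and the netns-removal command attributed to remove_spdk_ns are assumptions, the rest is taken from the trace.

# archive the SPDK trace buffer for offline analysis (process_shm)
tar -C /dev/shm/ -czf "$OUT/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

# stop bdevperf and the target, then unwind the host side (nvmftestfini)
kill "$bdevperf_pid" "$nvmfpid"
sync
modprobe -v -r nvme-tcp        # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from this
modprobe -v -r nvme-fabrics
sudo ip netns delete cvl_0_0_ns_spdk   # assumption: what remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1

# remove the generated PSK files
rm -f /tmp/tmp.FCrleno4db /tmp/tmp.hlimMSb2Ix /tmp/tmp.x8ms3XD6vM
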
00:35:13.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.859 08:48:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.859 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:35:13.860 Error setting digest 00:35:13.860 00220CACCA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:35:13.860 00220CACCA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:13.860 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:35:13.861 08:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:17.154 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:17.155 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 
00:35:17.155 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:17.155 Found net devices under 0000:84:00.0: cvl_0_0 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:17.155 Found net devices under 0000:84:00.1: cvl_0_1 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:17.155 
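At this point nvmf/common.sh has finished NIC discovery: it looks up PCI addresses for the NVMe-oF-capable device IDs it knows about (Intel E810/X722 and several Mellanox parts, via the pci_bus_cache lookups visible above) and then reads sysfs to turn each matched PCI function into a kernel net device name. A condensed, illustrative version of that loop, with the two E810 functions found on this host written out literally, is roughly:

  # PCI functions matched for vendor 0x8086, device 0x159b (ice/E810) on this machine
  pci_devs=(0000:84:00.0 0000:84:00.1)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:84:00.0/net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

Here the loop yields cvl_0_0 and cvl_0_1, so is_hw=yes and the test continues on real hardware (the phy path) rather than the virt fallback.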
08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:17.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:35:17.155 00:35:17.155 --- 10.0.0.2 ping statistics --- 00:35:17.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.155 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:35:17.155 00:35:17.155 --- 10.0.0.1 ping statistics --- 00:35:17.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.155 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2419014 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2419014 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2419014 ']' 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:17.155 08:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:17.416 [2024-07-23 08:48:29.927940] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
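The nvmf_tcp_init sequence above splits the two ports into initiator and target roles: cvl_0_0 is moved into a dedicated network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1, so NVMe/TCP traffic really traverses the physical link. Collapsed to the commands shown in the trace, the wiring is roughly:

  ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # do not firewall NVMe/TCP

After both directions answer a ping, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten blocks until its RPC socket at /var/tmp/spdk.sock is up.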
00:35:17.416 [2024-07-23 08:48:29.928263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.676 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.936 [2024-07-23 08:48:30.210210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.195 [2024-07-23 08:48:30.526679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.195 [2024-07-23 08:48:30.526760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.195 [2024-07-23 08:48:30.526794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.195 [2024-07-23 08:48:30.526820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.195 [2024-07-23 08:48:30.526846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:18.195 [2024-07-23 08:48:30.526913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:18.766 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:19.335 [2024-07-23 08:48:31.701933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.335 [2024-07-23 08:48:31.717911] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:19.335 [2024-07-23 08:48:31.718222] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.335 
[2024-07-23 08:48:31.811439] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:19.335 malloc0 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2419180 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2419180 /var/tmp/bdevperf.sock 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2419180 ']' 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:19.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:19.335 08:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:19.595 [2024-07-23 08:48:32.102330] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:19.595 [2024-07-23 08:48:32.102550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419180 ] 00:35:19.855 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.855 [2024-07-23 08:48:32.338059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.425 [2024-07-23 08:48:32.652660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:21.365 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:21.365 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:35:21.365 08:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:21.624 [2024-07-23 08:48:34.135708] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:21.624 [2024-07-23 08:48:34.135954] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:21.884 TLSTESTn1 00:35:21.884 08:48:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:22.146 Running I/O for 10 seconds... 
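With the FIPS provider verified (the deliberate openssl md5 attempt earlier has to fail, since MD5 is not a FIPS-approved digest), fips.sh runs real NVMe/TCP I/O over TLS: it writes the pre-shared key to key.txt, has setup_nvmf_tgt_conf register the subsystem and PSK on the target via rpc.py, then attaches a TLS-protected controller from a bdevperf instance and runs a 10-second verify workload. The initiator side, condensed from this trace (paths shortened to be repository-relative; the redirect into key.txt is implied rather than visible in the xtrace), looks roughly like:

  key=test/nvmf/fips/key.txt
  echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > "$key"
  chmod 0600 "$key"
  # bdevperf starts idle (-z) on core mask 0x4 and waits for RPC commands:
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
  # drive the queued verify workload against the resulting TLSTESTn1 bdev:
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests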
00:35:32.135 00:35:32.135 Latency(us) 00:35:32.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.135 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:32.135 Verification LBA range: start 0x0 length 0x2000 00:35:32.135 TLSTESTn1 : 10.04 1947.56 7.61 0.00 0.00 65568.99 13204.29 61749.48 00:35:32.135 =================================================================================================================== 00:35:32.135 Total : 1947.56 7.61 0.00 0.00 65568.99 13204.29 61749.48 00:35:32.135 0 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:35:32.135 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:32.135 nvmf_trace.0 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2419180 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2419180 ']' 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2419180 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419180 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419180' 00:35:32.395 killing process with pid 2419180 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2419180 00:35:32.395 Received shutdown signal, test time was about 10.000000 seconds 00:35:32.395 00:35:32.395 Latency(us) 00:35:32.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.395 =================================================================================================================== 00:35:32.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:32.395 
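When the run finishes, cleanup archives the target's shared-memory trace file before the bdevperf and nvmf_tgt processes are killed (tracing was enabled with -e 0xFFFF, and the startup notices above pointed at /dev/shm/nvmf_trace.0 for offline analysis). The capture step boils down to:

  # process_shm --id 0: find and archive the trace buffer left in /dev/shm
  find /dev/shm -name '*.0' -printf '%f\n'          # -> nvmf_trace.0
  tar -C /dev/shm/ -cvzf \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0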
[2024-07-23 08:48:44.772236] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:32.395 08:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2419180 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:33.776 rmmod nvme_tcp 00:35:33.776 rmmod nvme_fabrics 00:35:33.776 rmmod nvme_keyring 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2419014 ']' 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2419014 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2419014 ']' 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2419014 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419014 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419014' 00:35:33.776 killing process with pid 2419014 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2419014 00:35:33.776 [2024-07-23 08:48:46.278123] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:33.776 08:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2419014 00:35:35.712 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:35.712 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:35.712 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:35.712 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:35.712 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:35.712 08:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.712 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.712 08:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.619 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:37.620 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:35:37.620 00:35:37.620 real 0m24.210s 00:35:37.620 user 0m32.711s 00:35:37.620 sys 0m7.462s 00:35:37.620 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:37.620 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:37.620 ************************************ 00:35:37.620 END TEST nvmf_fips 00:35:37.620 ************************************ 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:35:37.880 ************************************ 00:35:37.880 START TEST nvmf_fuzz 00:35:37.880 ************************************ 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:35:37.880 * Looking for test storage... 
00:35:37.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:37.880 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
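The fuzz test begins the same way the FIPS test did: fabrics_fuzz.sh sources test/nvmf/common.sh, which fixes the port numbers, serial number and subsystem NQN used by the target tests and generates a fresh host identity for this run with nvme gen-hostnqn. Summarised with the values visible in this trace (the NVME_HOSTID derivation below is written out only illustratively; the script computes it from the NQN):

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NVME_HOSTNQN=$(nvme gen-hostnqn)    # this run: nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:} # the uuid portion of the host NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NET_TYPE=phy                        # this job runs against physical NICs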
00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:35:37.881 08:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:41.175 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:41.175 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:41.175 08:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:41.175 Found net devices under 0000:84:00.0: cvl_0_0 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:41.175 Found net devices under 0000:84:00.1: cvl_0_1 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.175 
08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:41.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:35:41.175 00:35:41.175 --- 10.0.0.2 ping statistics --- 00:35:41.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.175 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:35:41.175 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:41.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:35:41.435 00:35:41.435 --- 10.0.0.1 ping statistics --- 00:35:41.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.435 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2422968 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2422968 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2422968 
']' 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:41.435 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.347 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:35:43.607 Malloc0 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:35:43.607 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:36:15.694 Fuzzing completed. Shutting down the fuzz application 00:36:15.694 00:36:15.694 Dumping successful admin opcodes: 00:36:15.694 8, 9, 10, 24, 00:36:15.694 Dumping successful io opcodes: 00:36:15.694 0, 9, 00:36:15.694 NS: 0x200003aefec0 I/O qp, Total commands completed: 263653, total successful commands: 1565, random_seed: 3490724224 00:36:15.694 NS: 0x200003aefec0 admin qp, Total commands completed: 33216, total successful commands: 278, random_seed: 467137088 00:36:15.694 08:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:36:17.602 Fuzzing completed. Shutting down the fuzz application 00:36:17.602 00:36:17.602 Dumping successful admin opcodes: 00:36:17.602 24, 00:36:17.602 Dumping successful io opcodes: 00:36:17.602 00:36:17.602 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3270312398 00:36:17.602 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3270545334 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:17.602 rmmod nvme_tcp 00:36:17.602 rmmod nvme_fabrics 00:36:17.602 rmmod nvme_keyring 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2422968 ']' 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
2422968 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2422968 ']' 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 2422968 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2422968 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2422968' 00:36:17.602 killing process with pid 2422968 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 2422968 00:36:17.602 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 2422968 00:36:20.144 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:20.144 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:20.144 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:20.144 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:20.144 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:20.144 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.144 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.145 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:36:22.088 00:36:22.088 real 0m43.961s 00:36:22.088 user 1m1.180s 00:36:22.088 sys 0m15.241s 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:36:22.088 ************************************ 00:36:22.088 END TEST nvmf_fuzz 00:36:22.088 ************************************ 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:36:22.088 ************************************ 00:36:22.088 START TEST nvmf_multiconnection 00:36:22.088 ************************************ 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:36:22.088 * Looking for test storage... 00:36:22.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.088 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:36:22.089 08:49:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:36:25.383 08:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:25.383 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:25.383 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:25.383 Found net devices under 0000:84:00.0: cvl_0_0 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:25.383 Found net devices 
under 0000:84:00.1: cvl_0_1 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:25.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:25.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:36:25.383 00:36:25.383 --- 10.0.0.2 ping statistics --- 00:36:25.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.383 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:36:25.383 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:25.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:25.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:36:25.384 00:36:25.384 --- 10.0.0.1 ping statistics --- 00:36:25.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:25.384 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2429225 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2429225 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 2429225 ']' 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:25.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
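Behind the "Waiting for process to start up..." message, waitforlisten launches nvmf_tgt inside the target namespace and then polls until the process is alive and its JSON-RPC UNIX socket is available. A rough, simplified equivalent of what the harness does here (the real helper in common/autotest_common.sh retries in a bounded loop and aborts if the process dies) would be:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
    [ -S /var/tmp/spdk.sock ] && break    # RPC listener is up, safe to start issuing RPCs
    sleep 0.1
done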
00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:25.384 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:25.384 [2024-07-23 08:49:37.815116] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:25.384 [2024-07-23 08:49:37.815290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:25.643 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.643 [2024-07-23 08:49:38.036548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:26.210 [2024-07-23 08:49:38.498391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:26.210 [2024-07-23 08:49:38.498469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:26.210 [2024-07-23 08:49:38.498503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:26.210 [2024-07-23 08:49:38.498529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:26.210 [2024-07-23 08:49:38.498555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:26.210 [2024-07-23 08:49:38.498701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.210 [2024-07-23 08:49:38.498764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:26.210 [2024-07-23 08:49:38.498817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.210 [2024-07-23 08:49:38.498828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:26.468 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:26.469 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:36:26.469 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:26.469 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:26.469 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.728 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:26.728 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:26.728 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.728 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.728 [2024-07-23 08:49:39.015517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:26.728 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
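From here the trace repeats one fixed block of RPCs for Malloc1 through Malloc11. Collapsed back into the loop that multiconnection.sh is running (rpc_cmd is the harness helper that forwards each call to the target's JSON-RPC socket), it is just:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the test's -o -u 8192 options
for i in $(seq 1 $NVMF_SUBSYS); do                   # NVMF_SUBSYS=11 in this test
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i    # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done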
00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.729 Malloc1 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.729 [2024-07-23 08:49:39.160684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.729 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 Malloc2 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 Malloc3 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.987 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:26.987 08:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:36:26.988 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.988 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 Malloc4 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 Malloc5 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:36:27.247 08:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.247 Malloc6 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.247 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.507 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.508 Malloc7 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.508 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.508 Malloc8 00:36:27.508 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.508 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:36:27.508 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.508 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 
08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 Malloc9 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 Malloc10 00:36:27.768 08:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.768 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:28.028 Malloc11 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:36:28.028 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:28.029 08:49:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:28.598 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:36:28.598 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:28.598 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:28.598 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:28.598 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:31.132 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:31.132 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:31.132 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:36:31.132 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:31.133 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:31.133 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:31.133 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:31.133 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:36:31.391 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:36:31.391 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:31.391 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:31.391 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:31.391 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:33.291 08:49:45 
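The xtrace output above repeats the same four RPC calls for each of the eleven subsystems (Malloc1..Malloc11 backing cnode1..cnode11). Condensed into one place, the setup loop at target/multiconnection.sh lines 21-25 behaves roughly as the sketch below; every argument is taken from the trace itself, and only the condensed form (not the script's verbatim text) is assumed.

# Per-subsystem setup loop as reconstructed from the xtrace (multiconnection.sh @21-@25).
# NVMF_SUBSYS is 11 in this run; 10.0.0.2:4420 is the TCP listener seen in the log.
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                                  # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"       # subsystem with serial SPDK$i
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"           # attach the bdev as a namespace
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420   # expose it over TCP
done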
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:33.615 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:33.615 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:36:33.615 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:33.615 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:33.615 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:33.615 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:33.615 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:36:34.197 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:36:34.198 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:34.198 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:34.198 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:34.198 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:36.095 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:36:37.029 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:36:37.029 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:37.029 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:37.029 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:36:37.029 08:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:38.928 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:38.928 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:38.928 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:36:38.928 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:38.928 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:38.929 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:38.929 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:38.929 08:49:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:36:39.864 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:36:39.864 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:39.864 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:39.864 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:39.864 08:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:41.764 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:36:42.700 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:36:42.700 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:42.700 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:36:42.700 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:42.700 08:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:44.600 08:49:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:36:45.534 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:36:45.534 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:45.534 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:45.535 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:45.535 08:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:47.432 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:47.432 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:47.432 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:36:47.432 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:47.432 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:47.432 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:47.433 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:47.433 08:49:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:36:47.999 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:36:47.999 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:36:47.999 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:47.999 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:47.999 08:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:50.529 08:50:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:36:51.094 08:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:36:51.094 08:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:51.094 08:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:51.094 08:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:51.094 08:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:52.992 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:36:53.932 08:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:36:53.932 08:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:53.932 08:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:53.932 08:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:53.932 08:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:36:55.845 08:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:36:56.788 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:36:56.788 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:36:56.788 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:56.788 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:56.788 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:36:58.851 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:58.851 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:58.851 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:36:58.851 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:58.851 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:58.851 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:36:58.851 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:36:58.851 [global] 00:36:58.851 thread=1 00:36:58.851 invalidate=1 00:36:58.851 rw=read 
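On the host side the trace then connects to each subsystem over TCP and polls until a block device carrying the expected serial number appears (multiconnection.sh @28-@30 plus the waitforserial helper from autotest_common.sh). A sketch of that flow, reconstructed from the trace; the helper's exact argument handling beyond what the xtrace shows is an assumption.

# Rough shape of the waitforserial helper as it appears in the xtrace:
# sleep, then check lsblk for the serial, for up to 16 iterations.
waitforserial() {
    local serial=$1
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

# Connect loop seen in the trace: one nvme connect per subsystem, then wait
# for its namespace. The hostnqn/hostid UUID is the one printed in the log.
for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    waitforserial "SPDK$i"
done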
00:36:58.851 time_based=1 00:36:58.851 runtime=10 00:36:58.851 ioengine=libaio 00:36:58.851 direct=1 00:36:58.851 bs=262144 00:36:58.851 iodepth=64 00:36:58.851 norandommap=1 00:36:58.851 numjobs=1 00:36:58.851 00:36:58.851 [job0] 00:36:58.851 filename=/dev/nvme0n1 00:36:58.851 [job1] 00:36:58.851 filename=/dev/nvme10n1 00:36:58.851 [job2] 00:36:58.851 filename=/dev/nvme1n1 00:36:58.851 [job3] 00:36:58.851 filename=/dev/nvme2n1 00:36:58.851 [job4] 00:36:58.851 filename=/dev/nvme3n1 00:36:58.851 [job5] 00:36:58.851 filename=/dev/nvme4n1 00:36:58.851 [job6] 00:36:58.851 filename=/dev/nvme5n1 00:36:58.851 [job7] 00:36:58.851 filename=/dev/nvme6n1 00:36:58.851 [job8] 00:36:58.851 filename=/dev/nvme7n1 00:36:58.851 [job9] 00:36:58.851 filename=/dev/nvme8n1 00:36:58.851 [job10] 00:36:58.851 filename=/dev/nvme9n1 00:36:58.851 Could not set queue depth (nvme0n1) 00:36:58.851 Could not set queue depth (nvme10n1) 00:36:58.851 Could not set queue depth (nvme1n1) 00:36:58.851 Could not set queue depth (nvme2n1) 00:36:58.851 Could not set queue depth (nvme3n1) 00:36:58.851 Could not set queue depth (nvme4n1) 00:36:58.851 Could not set queue depth (nvme5n1) 00:36:58.851 Could not set queue depth (nvme6n1) 00:36:58.851 Could not set queue depth (nvme7n1) 00:36:58.851 Could not set queue depth (nvme8n1) 00:36:58.851 Could not set queue depth (nvme9n1) 00:36:59.109 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:36:59.109 fio-3.35 00:36:59.109 Starting 11 threads 00:37:11.327 00:37:11.327 job0: (groupid=0, jobs=1): err= 0: pid=2433293: Tue Jul 23 08:50:22 2024 00:37:11.327 read: IOPS=454, BW=114MiB/s (119MB/s)(1150MiB/10132msec) 00:37:11.327 slat (usec): min=8, max=130886, avg=1521.40, stdev=6654.49 00:37:11.327 clat (msec): min=2, max=382, avg=139.23, stdev=72.05 00:37:11.327 lat (msec): min=2, max=393, avg=140.75, stdev=72.94 00:37:11.327 clat percentiles (msec): 00:37:11.327 | 1.00th=[ 9], 5.00th=[ 41], 10.00th=[ 50], 20.00th=[ 74], 00:37:11.327 | 30.00th=[ 97], 40.00th=[ 120], 50.00th=[ 130], 60.00th=[ 144], 00:37:11.327 | 70.00th=[ 167], 80.00th=[ 207], 90.00th=[ 249], 95.00th=[ 279], 00:37:11.327 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 326], 99.95th=[ 334], 00:37:11.327 | 99.99th=[ 384] 00:37:11.327 bw ( KiB/s): min=60416, 
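Both fio passes in this run go through the same wrapper invocation, scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t <rw> -r 10, and the job file it prints above maps those flags onto bs, iodepth, rw and runtime. Written out by hand, an equivalent of the read-pass job file would look roughly like the sketch below (trimmed to three of the eleven per-device jobs; the job-file name and the direct fio invocation are illustrative, since the log only shows the wrapper's output).

# Hand-written equivalent of the job file fio-wrapper prints for the read pass
# (-i 262144 -> bs, -d 64 -> iodepth, -t read -> rw, -r 10 -> runtime).
cat > multiconnection-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme10n1

[job2]
filename=/dev/nvme1n1
EOF
# ...and so on, one [jobN] stanza per connected namespace up to /dev/nvme9n1.
fio multiconnection-read.fio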
max=273408, per=8.72%, avg=116172.80, stdev=47756.15, samples=20 00:37:11.327 iops : min= 236, max= 1068, avg=453.80, stdev=186.55, samples=20 00:37:11.327 lat (msec) : 4=0.04%, 10=1.24%, 20=1.02%, 50=8.26%, 100=20.32% 00:37:11.327 lat (msec) : 250=59.49%, 500=9.63% 00:37:11.327 cpu : usr=0.25%, sys=1.60%, ctx=722, majf=0, minf=4097 00:37:11.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:37:11.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.327 issued rwts: total=4601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.327 job1: (groupid=0, jobs=1): err= 0: pid=2433294: Tue Jul 23 08:50:22 2024 00:37:11.327 read: IOPS=671, BW=168MiB/s (176MB/s)(1702MiB/10141msec) 00:37:11.327 slat (usec): min=10, max=151231, avg=817.31, stdev=4936.80 00:37:11.327 clat (msec): min=2, max=383, avg=94.39, stdev=79.84 00:37:11.327 lat (msec): min=2, max=429, avg=95.21, stdev=80.45 00:37:11.327 clat percentiles (msec): 00:37:11.327 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 33], 20.00th=[ 41], 00:37:11.327 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 64], 00:37:11.327 | 70.00th=[ 112], 80.00th=[ 167], 90.00th=[ 222], 95.00th=[ 271], 00:37:11.327 | 99.00th=[ 321], 99.50th=[ 342], 99.90th=[ 384], 99.95th=[ 384], 00:37:11.327 | 99.99th=[ 384] 00:37:11.327 bw ( KiB/s): min=58368, max=376832, per=12.96%, avg=172672.00, stdev=108844.44, samples=20 00:37:11.327 iops : min= 228, max= 1472, avg=674.50, stdev=425.17, samples=20 00:37:11.327 lat (msec) : 4=0.06%, 10=1.16%, 20=2.70%, 50=45.26%, 100=17.89% 00:37:11.327 lat (msec) : 250=25.77%, 500=7.15% 00:37:11.327 cpu : usr=0.36%, sys=2.43%, ctx=971, majf=0, minf=4097 00:37:11.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:37:11.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.327 issued rwts: total=6809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.327 job2: (groupid=0, jobs=1): err= 0: pid=2433299: Tue Jul 23 08:50:22 2024 00:37:11.327 read: IOPS=417, BW=104MiB/s (109MB/s)(1057MiB/10132msec) 00:37:11.327 slat (usec): min=13, max=155677, avg=1882.01, stdev=8018.14 00:37:11.327 clat (msec): min=7, max=371, avg=151.35, stdev=64.15 00:37:11.327 lat (msec): min=7, max=407, avg=153.23, stdev=65.18 00:37:11.327 clat percentiles (msec): 00:37:11.327 | 1.00th=[ 16], 5.00th=[ 51], 10.00th=[ 75], 20.00th=[ 96], 00:37:11.327 | 30.00th=[ 116], 40.00th=[ 136], 50.00th=[ 148], 60.00th=[ 163], 00:37:11.327 | 70.00th=[ 180], 80.00th=[ 199], 90.00th=[ 236], 95.00th=[ 271], 00:37:11.327 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 363], 00:37:11.327 | 99.99th=[ 372] 00:37:11.327 bw ( KiB/s): min=55808, max=167424, per=8.00%, avg=106547.20, stdev=30102.03, samples=20 00:37:11.327 iops : min= 218, max= 654, avg=416.20, stdev=117.59, samples=20 00:37:11.327 lat (msec) : 10=0.24%, 20=1.68%, 50=2.77%, 100=17.68%, 250=70.28% 00:37:11.327 lat (msec) : 500=7.36% 00:37:11.327 cpu : usr=0.29%, sys=1.58%, ctx=649, majf=0, minf=4097 00:37:11.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:37:11.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.327 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.327 issued rwts: total=4226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.327 job3: (groupid=0, jobs=1): err= 0: pid=2433300: Tue Jul 23 08:50:22 2024 00:37:11.327 read: IOPS=441, BW=110MiB/s (116MB/s)(1119MiB/10144msec) 00:37:11.327 slat (usec): min=10, max=157702, avg=1270.90, stdev=6517.39 00:37:11.327 clat (msec): min=3, max=439, avg=143.58, stdev=73.44 00:37:11.327 lat (msec): min=3, max=439, avg=144.85, stdev=74.27 00:37:11.327 clat percentiles (msec): 00:37:11.327 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 55], 20.00th=[ 86], 00:37:11.327 | 30.00th=[ 102], 40.00th=[ 116], 50.00th=[ 134], 60.00th=[ 150], 00:37:11.327 | 70.00th=[ 176], 80.00th=[ 207], 90.00th=[ 259], 95.00th=[ 284], 00:37:11.327 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 376], 00:37:11.327 | 99.99th=[ 439] 00:37:11.327 bw ( KiB/s): min=51712, max=206336, per=8.47%, avg=112921.60, stdev=36668.29, samples=20 00:37:11.327 iops : min= 202, max= 806, avg=441.10, stdev=143.24, samples=20 00:37:11.327 lat (msec) : 4=0.02%, 10=1.77%, 20=1.79%, 50=5.74%, 100=20.56% 00:37:11.327 lat (msec) : 250=58.65%, 500=11.47% 00:37:11.327 cpu : usr=0.17%, sys=1.72%, ctx=767, majf=0, minf=4097 00:37:11.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:37:11.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.327 issued rwts: total=4474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.327 job4: (groupid=0, jobs=1): err= 0: pid=2433301: Tue Jul 23 08:50:22 2024 00:37:11.327 read: IOPS=426, BW=107MiB/s (112MB/s)(1083MiB/10144msec) 00:37:11.327 slat (usec): min=12, max=159180, avg=1264.45, stdev=6666.71 00:37:11.327 clat (msec): min=4, max=423, avg=148.49, stdev=84.50 00:37:11.327 lat (msec): min=4, max=434, avg=149.75, stdev=85.32 00:37:11.327 clat percentiles (msec): 00:37:11.327 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 72], 00:37:11.327 | 30.00th=[ 97], 40.00th=[ 128], 50.00th=[ 144], 60.00th=[ 163], 00:37:11.327 | 70.00th=[ 188], 80.00th=[ 222], 90.00th=[ 275], 95.00th=[ 296], 00:37:11.327 | 99.00th=[ 342], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 401], 00:37:11.327 | 99.99th=[ 426] 00:37:11.327 bw ( KiB/s): min=54784, max=197632, per=8.20%, avg=109235.20, stdev=37963.77, samples=20 00:37:11.327 iops : min= 214, max= 772, avg=426.70, stdev=148.30, samples=20 00:37:11.327 lat (msec) : 10=1.41%, 20=3.56%, 50=10.62%, 100=15.57%, 250=53.88% 00:37:11.327 lat (msec) : 500=14.97% 00:37:11.327 cpu : usr=0.23%, sys=1.62%, ctx=723, majf=0, minf=4097 00:37:11.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:37:11.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.327 issued rwts: total=4330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.327 job5: (groupid=0, jobs=1): err= 0: pid=2433302: Tue Jul 23 08:50:22 2024 00:37:11.327 read: IOPS=426, BW=107MiB/s (112MB/s)(1077MiB/10099msec) 00:37:11.327 slat (usec): min=10, max=142563, avg=1552.52, stdev=7053.60 00:37:11.327 clat (usec): min=1442, max=459874, avg=148333.26, stdev=87162.07 00:37:11.327 lat (usec): 
min=1476, max=459890, avg=149885.78, stdev=87982.85 00:37:11.327 clat percentiles (msec): 00:37:11.327 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 78], 00:37:11.328 | 30.00th=[ 107], 40.00th=[ 126], 50.00th=[ 144], 60.00th=[ 167], 00:37:11.328 | 70.00th=[ 184], 80.00th=[ 220], 90.00th=[ 275], 95.00th=[ 296], 00:37:11.328 | 99.00th=[ 342], 99.50th=[ 447], 99.90th=[ 451], 99.95th=[ 451], 00:37:11.328 | 99.99th=[ 460] 00:37:11.328 bw ( KiB/s): min=66048, max=166400, per=8.15%, avg=108646.40, stdev=22058.64, samples=20 00:37:11.328 iops : min= 258, max= 650, avg=424.40, stdev=86.17, samples=20 00:37:11.328 lat (msec) : 2=0.14%, 4=1.44%, 10=2.88%, 20=5.11%, 50=6.08% 00:37:11.328 lat (msec) : 100=11.84%, 250=58.16%, 500=14.35% 00:37:11.328 cpu : usr=0.26%, sys=1.52%, ctx=757, majf=0, minf=4097 00:37:11.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:37:11.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.328 issued rwts: total=4307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.328 job6: (groupid=0, jobs=1): err= 0: pid=2433303: Tue Jul 23 08:50:22 2024 00:37:11.328 read: IOPS=396, BW=99.0MiB/s (104MB/s)(1005MiB/10143msec) 00:37:11.328 slat (usec): min=11, max=213903, avg=1544.50, stdev=8047.75 00:37:11.328 clat (msec): min=2, max=447, avg=159.84, stdev=78.53 00:37:11.328 lat (msec): min=2, max=447, avg=161.38, stdev=79.72 00:37:11.328 clat percentiles (msec): 00:37:11.328 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 58], 20.00th=[ 99], 00:37:11.328 | 30.00th=[ 123], 40.00th=[ 133], 50.00th=[ 144], 60.00th=[ 167], 00:37:11.328 | 70.00th=[ 205], 80.00th=[ 230], 90.00th=[ 275], 95.00th=[ 296], 00:37:11.328 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 355], 99.95th=[ 368], 00:37:11.328 | 99.99th=[ 447] 00:37:11.328 bw ( KiB/s): min=51200, max=162304, per=7.60%, avg=101228.80, stdev=32271.16, samples=20 00:37:11.328 iops : min= 200, max= 634, avg=395.40, stdev=126.09, samples=20 00:37:11.328 lat (msec) : 4=0.05%, 10=1.47%, 20=3.19%, 50=3.73%, 100=12.05% 00:37:11.328 lat (msec) : 250=63.91%, 500=15.60% 00:37:11.328 cpu : usr=0.28%, sys=1.50%, ctx=784, majf=0, minf=4097 00:37:11.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:37:11.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.328 issued rwts: total=4018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.328 job7: (groupid=0, jobs=1): err= 0: pid=2433304: Tue Jul 23 08:50:22 2024 00:37:11.328 read: IOPS=510, BW=128MiB/s (134MB/s)(1294MiB/10135msec) 00:37:11.328 slat (usec): min=10, max=139629, avg=1169.09, stdev=6488.43 00:37:11.328 clat (usec): min=1973, max=395826, avg=123982.52, stdev=74155.12 00:37:11.328 lat (usec): min=1998, max=427988, avg=125151.61, stdev=75114.25 00:37:11.328 clat percentiles (msec): 00:37:11.328 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 25], 20.00th=[ 49], 00:37:11.328 | 30.00th=[ 75], 40.00th=[ 101], 50.00th=[ 125], 60.00th=[ 150], 00:37:11.328 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 213], 95.00th=[ 241], 00:37:11.328 | 99.00th=[ 338], 99.50th=[ 372], 99.90th=[ 393], 99.95th=[ 393], 00:37:11.328 | 99.99th=[ 397] 00:37:11.328 bw ( KiB/s): min=48128, max=242688, per=9.82%, 
avg=130892.80, stdev=49715.31, samples=20 00:37:11.328 iops : min= 188, max= 948, avg=511.30, stdev=194.20, samples=20 00:37:11.328 lat (msec) : 2=0.02%, 4=0.23%, 10=2.65%, 20=4.29%, 50=13.37% 00:37:11.328 lat (msec) : 100=19.01%, 250=56.41%, 500=4.02% 00:37:11.328 cpu : usr=0.27%, sys=1.85%, ctx=808, majf=0, minf=4097 00:37:11.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:11.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.328 issued rwts: total=5176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.328 job8: (groupid=0, jobs=1): err= 0: pid=2433305: Tue Jul 23 08:50:22 2024 00:37:11.328 read: IOPS=516, BW=129MiB/s (135MB/s)(1297MiB/10054msec) 00:37:11.328 slat (usec): min=10, max=160177, avg=1223.56, stdev=5460.06 00:37:11.328 clat (usec): min=1372, max=399899, avg=122653.24, stdev=64597.47 00:37:11.328 lat (usec): min=1420, max=399922, avg=123876.80, stdev=65137.69 00:37:11.328 clat percentiles (msec): 00:37:11.328 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 44], 20.00th=[ 71], 00:37:11.328 | 30.00th=[ 88], 40.00th=[ 106], 50.00th=[ 122], 60.00th=[ 130], 00:37:11.328 | 70.00th=[ 146], 80.00th=[ 169], 90.00th=[ 205], 95.00th=[ 245], 00:37:11.328 | 99.00th=[ 300], 99.50th=[ 347], 99.90th=[ 393], 99.95th=[ 401], 00:37:11.328 | 99.99th=[ 401] 00:37:11.328 bw ( KiB/s): min=64512, max=230400, per=9.85%, avg=131211.55, stdev=34815.73, samples=20 00:37:11.328 iops : min= 252, max= 900, avg=512.50, stdev=136.02, samples=20 00:37:11.328 lat (msec) : 2=0.04%, 4=0.37%, 10=2.64%, 20=1.83%, 50=7.80% 00:37:11.328 lat (msec) : 100=23.74%, 250=58.97%, 500=4.61% 00:37:11.328 cpu : usr=0.26%, sys=1.75%, ctx=890, majf=0, minf=4097 00:37:11.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:11.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.328 issued rwts: total=5189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.328 job9: (groupid=0, jobs=1): err= 0: pid=2433306: Tue Jul 23 08:50:22 2024 00:37:11.328 read: IOPS=496, BW=124MiB/s (130MB/s)(1260MiB/10144msec) 00:37:11.328 slat (usec): min=9, max=239656, avg=1172.85, stdev=6681.44 00:37:11.328 clat (msec): min=2, max=401, avg=127.50, stdev=78.40 00:37:11.328 lat (msec): min=2, max=508, avg=128.67, stdev=79.25 00:37:11.328 clat percentiles (msec): 00:37:11.328 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 45], 00:37:11.328 | 30.00th=[ 81], 40.00th=[ 103], 50.00th=[ 122], 60.00th=[ 146], 00:37:11.328 | 70.00th=[ 180], 80.00th=[ 197], 90.00th=[ 234], 95.00th=[ 266], 00:37:11.328 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 330], 99.95th=[ 330], 00:37:11.328 | 99.99th=[ 401] 00:37:11.328 bw ( KiB/s): min=62976, max=211968, per=9.56%, avg=127385.60, stdev=45282.22, samples=20 00:37:11.328 iops : min= 246, max= 828, avg=497.60, stdev=176.88, samples=20 00:37:11.328 lat (msec) : 4=0.30%, 10=1.49%, 20=7.98%, 50=12.11%, 100=17.48% 00:37:11.328 lat (msec) : 250=53.07%, 500=7.58% 00:37:11.328 cpu : usr=0.28%, sys=1.81%, ctx=844, majf=0, minf=3721 00:37:11.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:37:11.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:11.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.328 issued rwts: total=5039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.328 job10: (groupid=0, jobs=1): err= 0: pid=2433313: Tue Jul 23 08:50:22 2024 00:37:11.328 read: IOPS=456, BW=114MiB/s (120MB/s)(1158MiB/10138msec) 00:37:11.328 slat (usec): min=13, max=123926, avg=1778.57, stdev=7012.50 00:37:11.328 clat (msec): min=7, max=391, avg=138.10, stdev=82.11 00:37:11.328 lat (msec): min=7, max=391, avg=139.88, stdev=83.08 00:37:11.328 clat percentiles (msec): 00:37:11.328 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 53], 00:37:11.328 | 30.00th=[ 69], 40.00th=[ 103], 50.00th=[ 132], 60.00th=[ 165], 00:37:11.328 | 70.00th=[ 182], 80.00th=[ 213], 90.00th=[ 264], 95.00th=[ 284], 00:37:11.328 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 359], 00:37:11.328 | 99.99th=[ 393] 00:37:11.328 bw ( KiB/s): min=57856, max=330240, per=8.78%, avg=116940.80, stdev=69277.49, samples=20 00:37:11.328 iops : min= 226, max= 1290, avg=456.80, stdev=270.62, samples=20 00:37:11.328 lat (msec) : 10=0.52%, 20=1.51%, 50=16.02%, 100=21.44%, 250=47.65% 00:37:11.328 lat (msec) : 500=12.87% 00:37:11.328 cpu : usr=0.37%, sys=1.43%, ctx=689, majf=0, minf=4097 00:37:11.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:37:11.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:11.328 issued rwts: total=4632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:11.328 00:37:11.328 Run status group 0 (all jobs): 00:37:11.328 READ: bw=1301MiB/s (1364MB/s), 99.0MiB/s-168MiB/s (104MB/s-176MB/s), io=12.9GiB (13.8GB), run=10054-10144msec 00:37:11.328 00:37:11.328 Disk stats (read/write): 00:37:11.328 nvme0n1: ios=9099/0, merge=0/0, ticks=1252609/0, in_queue=1252609, util=95.83% 00:37:11.328 nvme10n1: ios=13491/0, merge=0/0, ticks=1244474/0, in_queue=1244474, util=96.02% 00:37:11.328 nvme1n1: ios=8340/0, merge=0/0, ticks=1245378/0, in_queue=1245378, util=96.35% 00:37:11.328 nvme2n1: ios=8898/0, merge=0/0, ticks=1257970/0, in_queue=1257970, util=96.47% 00:37:11.328 nvme3n1: ios=8574/0, merge=0/0, ticks=1252781/0, in_queue=1252781, util=96.61% 00:37:11.328 nvme4n1: ios=8524/0, merge=0/0, ticks=1254885/0, in_queue=1254885, util=97.00% 00:37:11.328 nvme5n1: ios=7910/0, merge=0/0, ticks=1248734/0, in_queue=1248734, util=97.18% 00:37:11.328 nvme6n1: ios=10246/0, merge=0/0, ticks=1246454/0, in_queue=1246454, util=97.32% 00:37:11.328 nvme7n1: ios=10369/0, merge=0/0, ticks=1260315/0, in_queue=1260315, util=98.38% 00:37:11.328 nvme8n1: ios=9972/0, merge=0/0, ticks=1247537/0, in_queue=1247537, util=98.88% 00:37:11.329 nvme9n1: ios=9143/0, merge=0/0, ticks=1238399/0, in_queue=1238399, util=99.24% 00:37:11.329 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:37:11.329 [global] 00:37:11.329 thread=1 00:37:11.329 invalidate=1 00:37:11.329 rw=randwrite 00:37:11.329 time_based=1 00:37:11.329 runtime=10 00:37:11.329 ioengine=libaio 00:37:11.329 direct=1 00:37:11.329 bs=262144 00:37:11.329 iodepth=64 00:37:11.329 norandommap=1 00:37:11.329 numjobs=1 00:37:11.329 00:37:11.329 [job0] 00:37:11.329 
filename=/dev/nvme0n1 00:37:11.329 [job1] 00:37:11.329 filename=/dev/nvme10n1 00:37:11.329 [job2] 00:37:11.329 filename=/dev/nvme1n1 00:37:11.329 [job3] 00:37:11.329 filename=/dev/nvme2n1 00:37:11.329 [job4] 00:37:11.329 filename=/dev/nvme3n1 00:37:11.329 [job5] 00:37:11.329 filename=/dev/nvme4n1 00:37:11.329 [job6] 00:37:11.329 filename=/dev/nvme5n1 00:37:11.329 [job7] 00:37:11.329 filename=/dev/nvme6n1 00:37:11.329 [job8] 00:37:11.329 filename=/dev/nvme7n1 00:37:11.329 [job9] 00:37:11.329 filename=/dev/nvme8n1 00:37:11.329 [job10] 00:37:11.329 filename=/dev/nvme9n1 00:37:11.329 Could not set queue depth (nvme0n1) 00:37:11.329 Could not set queue depth (nvme10n1) 00:37:11.329 Could not set queue depth (nvme1n1) 00:37:11.329 Could not set queue depth (nvme2n1) 00:37:11.329 Could not set queue depth (nvme3n1) 00:37:11.329 Could not set queue depth (nvme4n1) 00:37:11.329 Could not set queue depth (nvme5n1) 00:37:11.329 Could not set queue depth (nvme6n1) 00:37:11.329 Could not set queue depth (nvme7n1) 00:37:11.329 Could not set queue depth (nvme8n1) 00:37:11.329 Could not set queue depth (nvme9n1) 00:37:11.329 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:37:11.329 fio-3.35 00:37:11.329 Starting 11 threads 00:37:21.327 00:37:21.327 job0: (groupid=0, jobs=1): err= 0: pid=2434321: Tue Jul 23 08:50:33 2024 00:37:21.327 write: IOPS=315, BW=79.0MiB/s (82.8MB/s)(798MiB/10108msec); 0 zone resets 00:37:21.327 slat (usec): min=27, max=161698, avg=1789.36, stdev=7097.07 00:37:21.327 clat (usec): min=1358, max=522941, avg=200651.64, stdev=130582.51 00:37:21.327 lat (usec): min=1462, max=523001, avg=202441.00, stdev=131874.01 00:37:21.327 clat percentiles (msec): 00:37:21.327 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 53], 20.00th=[ 73], 00:37:21.327 | 30.00th=[ 104], 40.00th=[ 132], 50.00th=[ 161], 60.00th=[ 222], 00:37:21.327 | 70.00th=[ 288], 80.00th=[ 347], 90.00th=[ 397], 95.00th=[ 422], 00:37:21.327 | 99.00th=[ 464], 99.50th=[ 481], 99.90th=[ 506], 99.95th=[ 514], 00:37:21.327 | 99.99th=[ 523] 00:37:21.327 bw ( KiB/s): min=37888, max=178176, per=8.83%, avg=80128.00, stdev=45265.12, samples=20 00:37:21.327 iops : min= 148, max= 696, avg=313.00, stdev=176.82, samples=20 
00:37:21.327 lat (msec) : 2=0.34%, 4=0.47%, 10=1.63%, 20=1.63%, 50=5.20% 00:37:21.327 lat (msec) : 100=19.23%, 250=34.20%, 500=37.14%, 750=0.16% 00:37:21.327 cpu : usr=1.23%, sys=1.34%, ctx=2051, majf=0, minf=1 00:37:21.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:37:21.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.327 issued rwts: total=0,3193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.327 job1: (groupid=0, jobs=1): err= 0: pid=2434322: Tue Jul 23 08:50:33 2024 00:37:21.327 write: IOPS=348, BW=87.0MiB/s (91.2MB/s)(894MiB/10275msec); 0 zone resets 00:37:21.327 slat (usec): min=26, max=112666, avg=1479.96, stdev=5805.50 00:37:21.327 clat (usec): min=1688, max=578018, avg=182259.70, stdev=132686.94 00:37:21.327 lat (usec): min=1773, max=578072, avg=183739.66, stdev=133806.58 00:37:21.327 clat percentiles (msec): 00:37:21.327 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 64], 00:37:21.327 | 30.00th=[ 93], 40.00th=[ 126], 50.00th=[ 159], 60.00th=[ 199], 00:37:21.327 | 70.00th=[ 239], 80.00th=[ 300], 90.00th=[ 397], 95.00th=[ 430], 00:37:21.327 | 99.00th=[ 510], 99.50th=[ 527], 99.90th=[ 558], 99.95th=[ 575], 00:37:21.327 | 99.99th=[ 575] 00:37:21.327 bw ( KiB/s): min=41984, max=242176, per=9.91%, avg=89907.20, stdev=44622.57, samples=20 00:37:21.327 iops : min= 164, max= 946, avg=351.20, stdev=174.31, samples=20 00:37:21.327 lat (msec) : 2=0.36%, 4=1.06%, 10=4.45%, 20=6.07%, 50=5.54% 00:37:21.327 lat (msec) : 100=15.46%, 250=40.24%, 500=25.42%, 750=1.40% 00:37:21.327 cpu : usr=1.42%, sys=1.58%, ctx=2426, majf=0, minf=1 00:37:21.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:37:21.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.327 issued rwts: total=0,3576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.327 job2: (groupid=0, jobs=1): err= 0: pid=2434323: Tue Jul 23 08:50:33 2024 00:37:21.327 write: IOPS=310, BW=77.7MiB/s (81.5MB/s)(798MiB/10272msec); 0 zone resets 00:37:21.327 slat (usec): min=25, max=159990, avg=1757.74, stdev=6588.81 00:37:21.327 clat (usec): min=1486, max=596863, avg=203856.72, stdev=148518.01 00:37:21.327 lat (usec): min=1560, max=596914, avg=205614.46, stdev=150085.79 00:37:21.327 clat percentiles (msec): 00:37:21.327 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 13], 20.00th=[ 29], 00:37:21.327 | 30.00th=[ 64], 40.00th=[ 142], 50.00th=[ 207], 60.00th=[ 279], 00:37:21.327 | 70.00th=[ 317], 80.00th=[ 351], 90.00th=[ 393], 95.00th=[ 430], 00:37:21.327 | 99.00th=[ 477], 99.50th=[ 510], 99.90th=[ 575], 99.95th=[ 600], 00:37:21.327 | 99.99th=[ 600] 00:37:21.327 bw ( KiB/s): min=34816, max=162629, per=8.83%, avg=80093.05, stdev=37140.13, samples=20 00:37:21.327 iops : min= 136, max= 635, avg=312.85, stdev=145.05, samples=20 00:37:21.327 lat (msec) : 2=0.56%, 4=2.16%, 10=5.20%, 20=6.48%, 50=12.88% 00:37:21.327 lat (msec) : 100=7.08%, 250=20.14%, 500=44.83%, 750=0.66% 00:37:21.327 cpu : usr=1.19%, sys=1.57%, ctx=2283, majf=0, minf=1 00:37:21.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:37:21.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.327 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.327 issued rwts: total=0,3192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.327 job3: (groupid=0, jobs=1): err= 0: pid=2434335: Tue Jul 23 08:50:33 2024 00:37:21.327 write: IOPS=308, BW=77.2MiB/s (81.0MB/s)(781MiB/10104msec); 0 zone resets 00:37:21.327 slat (usec): min=22, max=228999, avg=1602.46, stdev=6887.16 00:37:21.327 clat (usec): min=1789, max=546759, avg=205319.43, stdev=139638.98 00:37:21.327 lat (usec): min=1862, max=605165, avg=206921.89, stdev=141109.55 00:37:21.327 clat percentiles (msec): 00:37:21.327 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 30], 20.00th=[ 52], 00:37:21.327 | 30.00th=[ 88], 40.00th=[ 148], 50.00th=[ 205], 60.00th=[ 271], 00:37:21.327 | 70.00th=[ 300], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 430], 00:37:21.327 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542], 00:37:21.327 | 99.99th=[ 550] 00:37:21.327 bw ( KiB/s): min=28672, max=138240, per=8.63%, avg=78315.10, stdev=32243.08, samples=20 00:37:21.327 iops : min= 112, max= 540, avg=305.90, stdev=125.97, samples=20 00:37:21.327 lat (msec) : 2=0.06%, 4=0.45%, 10=2.11%, 20=3.97%, 50=13.00% 00:37:21.327 lat (msec) : 100=12.20%, 250=24.34%, 500=42.31%, 750=1.54% 00:37:21.327 cpu : usr=1.17%, sys=1.57%, ctx=2324, majf=0, minf=1 00:37:21.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:37:21.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.327 issued rwts: total=0,3122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.327 job4: (groupid=0, jobs=1): err= 0: pid=2434336: Tue Jul 23 08:50:33 2024 00:37:21.327 write: IOPS=323, BW=80.8MiB/s (84.7MB/s)(813MiB/10066msec); 0 zone resets 00:37:21.327 slat (usec): min=29, max=211409, avg=2668.04, stdev=8029.19 00:37:21.327 clat (msec): min=5, max=648, avg=195.31, stdev=146.90 00:37:21.327 lat (msec): min=5, max=648, avg=197.98, stdev=148.89 00:37:21.327 clat percentiles (msec): 00:37:21.327 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 66], 20.00th=[ 70], 00:37:21.327 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 122], 60.00th=[ 249], 00:37:21.327 | 70.00th=[ 300], 80.00th=[ 347], 90.00th=[ 401], 95.00th=[ 447], 00:37:21.327 | 99.00th=[ 567], 99.50th=[ 600], 99.90th=[ 625], 99.95th=[ 651], 00:37:21.327 | 99.99th=[ 651] 00:37:21.327 bw ( KiB/s): min=34816, max=229888, per=9.00%, avg=81647.80, stdev=59338.21, samples=20 00:37:21.327 iops : min= 136, max= 898, avg=318.90, stdev=231.78, samples=20 00:37:21.327 lat (msec) : 10=0.28%, 20=1.45%, 50=6.67%, 100=38.01%, 250=13.68% 00:37:21.327 lat (msec) : 500=37.79%, 750=2.12% 00:37:21.327 cpu : usr=1.26%, sys=1.06%, ctx=1363, majf=0, minf=1 00:37:21.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:37:21.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.327 issued rwts: total=0,3252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.328 job5: (groupid=0, jobs=1): err= 0: pid=2434337: Tue Jul 23 08:50:33 2024 00:37:21.328 write: IOPS=276, BW=69.0MiB/s (72.4MB/s)(709MiB/10267msec); 0 zone resets 00:37:21.328 slat (usec): min=22, max=148070, avg=2184.49, stdev=7841.13 
00:37:21.328 clat (msec): min=2, max=599, avg=229.49, stdev=144.57 00:37:21.328 lat (msec): min=2, max=599, avg=231.67, stdev=146.19 00:37:21.328 clat percentiles (msec): 00:37:21.328 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 67], 00:37:21.328 | 30.00th=[ 127], 40.00th=[ 186], 50.00th=[ 243], 60.00th=[ 284], 00:37:21.328 | 70.00th=[ 330], 80.00th=[ 359], 90.00th=[ 414], 95.00th=[ 472], 00:37:21.328 | 99.00th=[ 527], 99.50th=[ 550], 99.90th=[ 575], 99.95th=[ 600], 00:37:21.328 | 99.99th=[ 600] 00:37:21.328 bw ( KiB/s): min=37376, max=120320, per=7.82%, avg=70942.20, stdev=27358.43, samples=20 00:37:21.328 iops : min= 146, max= 470, avg=277.10, stdev=106.89, samples=20 00:37:21.328 lat (msec) : 4=0.11%, 10=1.91%, 20=4.73%, 50=9.39%, 100=8.65% 00:37:21.328 lat (msec) : 250=27.73%, 500=44.81%, 750=2.68% 00:37:21.328 cpu : usr=1.01%, sys=1.17%, ctx=1838, majf=0, minf=1 00:37:21.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:37:21.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.328 issued rwts: total=0,2834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.328 job6: (groupid=0, jobs=1): err= 0: pid=2434338: Tue Jul 23 08:50:33 2024 00:37:21.328 write: IOPS=293, BW=73.4MiB/s (76.9MB/s)(754MiB/10274msec); 0 zone resets 00:37:21.328 slat (usec): min=19, max=204054, avg=2145.81, stdev=7929.18 00:37:21.328 clat (usec): min=1534, max=620875, avg=215763.24, stdev=145928.62 00:37:21.328 lat (usec): min=1596, max=620943, avg=217909.05, stdev=147778.93 00:37:21.328 clat percentiles (msec): 00:37:21.328 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 41], 00:37:21.328 | 30.00th=[ 87], 40.00th=[ 167], 50.00th=[ 249], 60.00th=[ 292], 00:37:21.328 | 70.00th=[ 317], 80.00th=[ 347], 90.00th=[ 393], 95.00th=[ 418], 00:37:21.328 | 99.00th=[ 518], 99.50th=[ 550], 99.90th=[ 600], 99.95th=[ 625], 00:37:21.328 | 99.99th=[ 625] 00:37:21.328 bw ( KiB/s): min=45056, max=128000, per=8.33%, avg=75550.55, stdev=28723.79, samples=20 00:37:21.328 iops : min= 176, max= 500, avg=295.10, stdev=112.22, samples=20 00:37:21.328 lat (msec) : 2=0.30%, 4=0.96%, 10=4.05%, 20=5.97%, 50=10.25% 00:37:21.328 lat (msec) : 100=10.85%, 250=17.71%, 500=48.39%, 750=1.53% 00:37:21.328 cpu : usr=1.23%, sys=1.48%, ctx=2034, majf=0, minf=1 00:37:21.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:37:21.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.328 issued rwts: total=0,3015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.328 job7: (groupid=0, jobs=1): err= 0: pid=2434339: Tue Jul 23 08:50:33 2024 00:37:21.328 write: IOPS=316, BW=79.2MiB/s (83.1MB/s)(814MiB/10274msec); 0 zone resets 00:37:21.328 slat (usec): min=26, max=251999, avg=1421.06, stdev=7534.95 00:37:21.328 clat (usec): min=1564, max=626660, avg=200198.17, stdev=128884.43 00:37:21.328 lat (usec): min=1739, max=626713, avg=201619.22, stdev=130006.75 00:37:21.328 clat percentiles (msec): 00:37:21.328 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 32], 20.00th=[ 63], 00:37:21.328 | 30.00th=[ 109], 40.00th=[ 163], 50.00th=[ 199], 60.00th=[ 241], 00:37:21.328 | 70.00th=[ 279], 80.00th=[ 309], 90.00th=[ 359], 95.00th=[ 418], 00:37:21.328 | 
99.00th=[ 518], 99.50th=[ 542], 99.90th=[ 609], 99.95th=[ 625], 00:37:21.328 | 99.99th=[ 625] 00:37:21.328 bw ( KiB/s): min=43008, max=182784, per=9.01%, avg=81752.05, stdev=34242.97, samples=20 00:37:21.328 iops : min= 168, max= 714, avg=319.30, stdev=133.72, samples=20 00:37:21.328 lat (msec) : 2=0.18%, 4=0.40%, 10=1.54%, 20=3.38%, 50=10.84% 00:37:21.328 lat (msec) : 100=12.01%, 250=34.64%, 500=35.20%, 750=1.81% 00:37:21.328 cpu : usr=1.18%, sys=1.43%, ctx=2397, majf=0, minf=1 00:37:21.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:37:21.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.328 issued rwts: total=0,3256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.328 job8: (groupid=0, jobs=1): err= 0: pid=2434340: Tue Jul 23 08:50:33 2024 00:37:21.328 write: IOPS=371, BW=92.8MiB/s (97.3MB/s)(944MiB/10170msec); 0 zone resets 00:37:21.328 slat (usec): min=25, max=92758, avg=2083.71, stdev=5544.43 00:37:21.328 clat (usec): min=1472, max=473472, avg=170172.43, stdev=113161.64 00:37:21.328 lat (usec): min=1506, max=476366, avg=172256.14, stdev=114651.10 00:37:21.328 clat percentiles (msec): 00:37:21.328 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 67], 00:37:21.328 | 30.00th=[ 72], 40.00th=[ 114], 50.00th=[ 159], 60.00th=[ 203], 00:37:21.328 | 70.00th=[ 241], 80.00th=[ 284], 90.00th=[ 321], 95.00th=[ 363], 00:37:21.328 | 99.00th=[ 447], 99.50th=[ 460], 99.90th=[ 468], 99.95th=[ 472], 00:37:21.328 | 99.99th=[ 472] 00:37:21.328 bw ( KiB/s): min=43008, max=217088, per=10.48%, avg=95033.05, stdev=52546.71, samples=20 00:37:21.328 iops : min= 168, max= 848, avg=371.20, stdev=205.28, samples=20 00:37:21.328 lat (msec) : 2=0.29%, 4=0.77%, 10=0.98%, 20=3.31%, 50=10.51% 00:37:21.328 lat (msec) : 100=21.29%, 250=35.30%, 500=27.54% 00:37:21.328 cpu : usr=1.52%, sys=1.17%, ctx=1930, majf=0, minf=1 00:37:21.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:37:21.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.328 issued rwts: total=0,3776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.328 job9: (groupid=0, jobs=1): err= 0: pid=2434341: Tue Jul 23 08:50:33 2024 00:37:21.328 write: IOPS=363, BW=91.0MiB/s (95.4MB/s)(925MiB/10165msec); 0 zone resets 00:37:21.328 slat (usec): min=24, max=123098, avg=2141.14, stdev=5703.94 00:37:21.328 clat (usec): min=1778, max=536968, avg=173573.75, stdev=105791.04 00:37:21.328 lat (usec): min=1887, max=537018, avg=175714.88, stdev=106809.88 00:37:21.328 clat percentiles (msec): 00:37:21.328 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 56], 20.00th=[ 79], 00:37:21.328 | 30.00th=[ 91], 40.00th=[ 136], 50.00th=[ 163], 60.00th=[ 182], 00:37:21.328 | 70.00th=[ 211], 80.00th=[ 288], 90.00th=[ 317], 95.00th=[ 338], 00:37:21.328 | 99.00th=[ 472], 99.50th=[ 510], 99.90th=[ 531], 99.95th=[ 535], 00:37:21.328 | 99.99th=[ 542] 00:37:21.328 bw ( KiB/s): min=35840, max=194560, per=10.27%, avg=93115.35, stdev=46739.40, samples=20 00:37:21.328 iops : min= 140, max= 760, avg=363.70, stdev=182.58, samples=20 00:37:21.328 lat (msec) : 2=0.03%, 4=0.54%, 10=1.30%, 20=1.11%, 50=6.05% 00:37:21.328 lat (msec) : 100=23.68%, 250=42.86%, 500=23.84%, 750=0.59% 
00:37:21.328 cpu : usr=1.30%, sys=1.50%, ctx=1699, majf=0, minf=1 00:37:21.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:37:21.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.328 issued rwts: total=0,3700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.328 job10: (groupid=0, jobs=1): err= 0: pid=2434343: Tue Jul 23 08:50:33 2024 00:37:21.328 write: IOPS=342, BW=85.7MiB/s (89.9MB/s)(872MiB/10164msec); 0 zone resets 00:37:21.328 slat (usec): min=24, max=124078, avg=1557.01, stdev=6274.27 00:37:21.328 clat (usec): min=1487, max=504196, avg=184904.70, stdev=131235.40 00:37:21.328 lat (usec): min=1544, max=504308, avg=186461.71, stdev=132801.81 00:37:21.328 clat percentiles (msec): 00:37:21.328 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 49], 00:37:21.328 | 30.00th=[ 92], 40.00th=[ 136], 50.00th=[ 169], 60.00th=[ 203], 00:37:21.328 | 70.00th=[ 262], 80.00th=[ 309], 90.00th=[ 384], 95.00th=[ 426], 00:37:21.328 | 99.00th=[ 485], 99.50th=[ 489], 99.90th=[ 502], 99.95th=[ 502], 00:37:21.328 | 99.99th=[ 506] 00:37:21.328 bw ( KiB/s): min=32768, max=138240, per=9.66%, avg=87628.80, stdev=36058.10, samples=20 00:37:21.328 iops : min= 128, max= 540, avg=342.30, stdev=140.85, samples=20 00:37:21.328 lat (msec) : 2=0.43%, 4=1.49%, 10=3.47%, 20=4.48%, 50=11.16% 00:37:21.328 lat (msec) : 100=10.44%, 250=36.49%, 500=32.01%, 750=0.03% 00:37:21.328 cpu : usr=1.36%, sys=1.45%, ctx=2502, majf=0, minf=1 00:37:21.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:37:21.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:37:21.328 issued rwts: total=0,3486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:37:21.328 00:37:21.328 Run status group 0 (all jobs): 00:37:21.328 WRITE: bw=886MiB/s (929MB/s), 69.0MiB/s-92.8MiB/s (72.4MB/s-97.3MB/s), io=9101MiB (9543MB), run=10066-10275msec 00:37:21.328 00:37:21.328 Disk stats (read/write): 00:37:21.328 nvme0n1: ios=47/6158, merge=0/0, ticks=2763/1219341, in_queue=1222104, util=99.64% 00:37:21.328 nvme10n1: ios=49/7088, merge=0/0, ticks=97/1242544, in_queue=1242641, util=95.42% 00:37:21.328 nvme1n1: ios=44/6324, merge=0/0, ticks=890/1243276, in_queue=1244166, util=100.00% 00:37:21.328 nvme2n1: ios=43/5987, merge=0/0, ticks=1165/1214868, in_queue=1216033, util=100.00% 00:37:21.328 nvme3n1: ios=0/6211, merge=0/0, ticks=0/1211733, in_queue=1211733, util=95.52% 00:37:21.328 nvme4n1: ios=0/5611, merge=0/0, ticks=0/1241281, in_queue=1241281, util=96.50% 00:37:21.328 nvme5n1: ios=0/5968, merge=0/0, ticks=0/1238646, in_queue=1238646, util=96.93% 00:37:21.328 nvme6n1: ios=45/6450, merge=0/0, ticks=2925/1227771, in_queue=1230696, util=99.90% 00:37:21.328 nvme7n1: ios=0/7536, merge=0/0, ticks=0/1241389, in_queue=1241389, util=98.28% 00:37:21.328 nvme8n1: ios=0/7390, merge=0/0, ticks=0/1241904, in_queue=1241904, util=98.76% 00:37:21.328 nvme9n1: ios=0/6954, merge=0/0, ticks=0/1252679, in_queue=1252679, util=99.03% 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:37:21.329 
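The per-subsystem teardown that the trace below walks through is short; a minimal sketch of it, assuming the helpers named in the trace (waitforserial_disconnect, rpc_cmd) behave as their names suggest, is:

    # tear down each of the NVMF_SUBSYS subsystems used for the fio run above
    for i in $(seq 1 $NVMF_SUBSYS); do
        # drop the initiator-side connection first
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # wait until the SPDK$i serial no longer shows up in lsblk
        waitforserial_disconnect "SPDK${i}"
        # then remove the subsystem on the target via JSON-RPC
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done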
08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:21.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:21.329 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:37:21.894 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:21.895 
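Each waitforserial_disconnect call in this trace probes lsblk until the given serial is gone; a rough sketch of that wait, reusing the lsblk/grep probe seen above (the retry limit here is a hypothetical value, not taken from the trace), is:

    # poll until no block device reports the given serial any more
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1   # give up rather than hang forever
            sleep 1
        done
        return 0
    }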
08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:21.895 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:37:22.153 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:22.153 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:37:22.721 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.721 08:50:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:22.721 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.721 
08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:22.721 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:37:22.980 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:22.980 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:37:23.240 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:37:23.240 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:37:23.240 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:23.240 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:23.240 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.500 
08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:23.500 08:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:37:23.759 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:23.759 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:37:24.018 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.018 
08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:24.018 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:37:24.276 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:24.276 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:37:24.534 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:37:24.534 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:37:24.794 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:24.794 rmmod nvme_tcp 00:37:24.794 rmmod nvme_fabrics 00:37:24.794 rmmod nvme_keyring 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2429225 ']' 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2429225 00:37:24.794 08:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 2429225 ']' 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 2429225 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:24.794 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2429225 00:37:25.055 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:25.055 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:25.055 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2429225' 00:37:25.055 killing process with pid 2429225 00:37:25.055 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 2429225 00:37:25.055 08:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 2429225 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:30.370 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:31.751 00:37:31.751 real 1m9.871s 00:37:31.751 user 3m58.450s 00:37:31.751 sys 0m22.833s 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:37:31.751 ************************************ 00:37:31.751 END TEST nvmf_multiconnection 00:37:31.751 ************************************ 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:37:31.751 
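The nvmftestfini path traced above unloads the host NVMe/TCP modules and stops the target before the next test begins; condensed, and assuming modprobe and kill behave as the trace shows, it amounts to:

    # host side: drop the kernel initiator modules loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # target side: stop the nvmf_tgt reactor process and wait for it to exit
    kill "$nvmfpid" && wait "$nvmfpid"
    # finally flush the test address from the remaining interface
    ip -4 addr flush cvl_0_1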
************************************ 00:37:31.751 START TEST nvmf_initiator_timeout 00:37:31.751 ************************************ 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:37:31.751 * Looking for test storage... 00:37:31.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.751 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.752 08:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:37:31.752 08:50:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:37:35.050 08:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:35.050 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:35.051 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.051 08:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:35.051 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:35.051 Found net devices under 0000:84:00.0: cvl_0_0 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.051 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.052 08:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:35.052 Found net devices under 0000:84:00.1: cvl_0_1 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:35.052 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:35.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:35.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:37:35.313 00:37:35.313 --- 10.0.0.2 ping statistics --- 00:37:35.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.313 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:35.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:35.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:37:35.313 00:37:35.313 --- 10.0.0.1 ping statistics --- 00:37:35.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.313 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2438279 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2438279 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 2438279 ']' 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:35.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:35.313 08:50:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:35.573 [2024-07-23 08:50:47.870810] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:35.573 [2024-07-23 08:50:47.871050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.573 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.832 [2024-07-23 08:50:48.171870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:36.401 [2024-07-23 08:50:48.644159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.401 [2024-07-23 08:50:48.644282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.401 [2024-07-23 08:50:48.644367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.401 [2024-07-23 08:50:48.644416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.401 [2024-07-23 08:50:48.644443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:36.401 [2024-07-23 08:50:48.644867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.401 [2024-07-23 08:50:48.644972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.401 [2024-07-23 08:50:48.645155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.401 [2024-07-23 08:50:48.645166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:36.971 Malloc0 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
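Annotation: a minimal standalone sketch of the RPC sequence that the initiator_timeout xtrace around this point drives through rpc_cmd. It assumes scripts/rpc.py from the checked-out SPDK tree talking to the default /var/tmp/spdk.sock of the nvmf_tgt started above; every command, name and value is copied from the trace, and only the $rpc shorthand is a hypothetical stand-in for what rpc_cmd expands to.

    rpc="./scripts/rpc.py"    # hypothetical shorthand; the test framework's rpc_cmd wrapper plays this role

    # back-end: a 64 MB / 512-byte-block malloc bdev wrapped in a delay bdev (latency arguments in microseconds)
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

    # target plumbing: TCP transport, subsystem, namespace, listener on 10.0.0.2:4420
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side (root namespace, cvl_0_1 / 10.0.0.1): connect, then wait for the serial to appear in lsblk
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
                 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Later in the trace the same $rpc path is used for bdev_delay_update_latency Delay0 <avg_read|avg_write|p99_read|p99_write> <value> to raise the delays while fio is running and then drop them back to 30, which is what the initiator timeout is being tested against.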
00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.971 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:37.231 Delay0 00:37:37.231 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.231 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:37.231 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.231 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:37.231 [2024-07-23 08:50:49.500931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.231 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.231 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:37.232 [2024-07-23 08:50:49.533355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.232 08:50:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:37.802 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:37:37.802 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 
-- # local i=0 00:37:37.802 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:37:37.802 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:37:37.802 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2438758 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:37:39.712 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:37:39.712 [global] 00:37:39.712 thread=1 00:37:39.712 invalidate=1 00:37:39.712 rw=write 00:37:39.712 time_based=1 00:37:39.712 runtime=60 00:37:39.712 ioengine=libaio 00:37:39.712 direct=1 00:37:39.712 bs=4096 00:37:39.712 iodepth=1 00:37:39.712 norandommap=0 00:37:39.712 numjobs=1 00:37:39.712 00:37:39.712 verify_dump=1 00:37:39.712 verify_backlog=512 00:37:39.712 verify_state_save=0 00:37:39.712 do_verify=1 00:37:39.712 verify=crc32c-intel 00:37:39.712 [job0] 00:37:39.712 filename=/dev/nvme0n1 00:37:39.712 Could not set queue depth (nvme0n1) 00:37:39.972 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:39.972 fio-3.35 00:37:39.972 Starting 1 thread 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:43.265 true 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:43.265 true 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.265 08:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:43.265 true 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:43.265 true 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.265 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:45.804 true 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:45.804 true 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:45.804 true 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:37:45.804 true 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:45.804 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:37:45.804 08:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2438758 00:38:42.095 00:38:42.095 job0: (groupid=0, jobs=1): err= 0: pid=2438831: Tue Jul 23 08:51:52 2024 00:38:42.095 read: IOPS=186, BW=747KiB/s (765kB/s)(43.8MiB/60017msec) 00:38:42.095 slat (usec): min=8, max=15351, avg=31.18, stdev=187.96 00:38:42.095 clat (usec): min=363, max=40969k, avg=4838.10, stdev=387087.82 00:38:42.095 lat (usec): min=380, max=40969k, avg=4869.28, stdev=387087.75 00:38:42.095 clat percentiles (usec): 00:38:42.095 | 1.00th=[ 441], 5.00th=[ 474], 10.00th=[ 498], 20.00th=[ 523], 00:38:42.095 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 586], 00:38:42.095 | 70.00th=[ 594], 80.00th=[ 603], 90.00th=[ 619], 95.00th=[ 635], 00:38:42.095 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:42.095 | 99.99th=[42206] 00:38:42.095 write: IOPS=187, BW=751KiB/s (769kB/s)(44.0MiB/60017msec); 0 zone resets 00:38:42.095 slat (usec): min=8, max=29626, avg=36.44, stdev=279.13 00:38:42.095 clat (usec): min=269, max=2035, avg=431.72, stdev=51.51 00:38:42.095 lat (usec): min=280, max=30067, avg=468.16, stdev=284.49 00:38:42.095 clat percentiles (usec): 00:38:42.095 | 1.00th=[ 322], 5.00th=[ 343], 10.00th=[ 363], 20.00th=[ 392], 00:38:42.095 | 30.00th=[ 408], 40.00th=[ 424], 50.00th=[ 437], 60.00th=[ 449], 00:38:42.095 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 490], 95.00th=[ 506], 00:38:42.095 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 586], 99.95th=[ 603], 00:38:42.095 | 99.99th=[ 979] 00:38:42.095 bw ( KiB/s): min= 1168, max= 4096, per=100.00%, avg=3604.48, stdev=670.53, samples=25 00:38:42.095 iops : min= 292, max= 1024, avg=901.12, stdev=167.63, samples=25 00:38:42.095 lat (usec) : 500=52.34%, 750=46.84%, 1000=0.04% 00:38:42.095 lat (msec) : 2=0.01%, 4=0.01%, 50=0.76%, >=2000=0.01% 00:38:42.095 cpu : usr=0.71%, sys=1.47%, ctx=22473, majf=0, minf=39 00:38:42.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.095 issued rwts: total=11203,11264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:42.095 00:38:42.095 Run status group 0 (all jobs): 00:38:42.095 READ: bw=747KiB/s (765kB/s), 747KiB/s-747KiB/s (765kB/s-765kB/s), io=43.8MiB (45.9MB), run=60017-60017msec 00:38:42.095 WRITE: bw=751KiB/s (769kB/s), 751KiB/s-751KiB/s (769kB/s-769kB/s), io=44.0MiB (46.1MB), run=60017-60017msec 00:38:42.095 00:38:42.095 Disk stats (read/write): 00:38:42.095 nvme0n1: ios=11252/11264, merge=0/0, ticks=14197/4672, in_queue=18869, util=100.00% 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:42.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:38:42.095 nvmf hotplug test: fio successful as expected 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.095 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:42.096 rmmod nvme_tcp 00:38:42.096 rmmod nvme_fabrics 00:38:42.096 rmmod nvme_keyring 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2438279 ']' 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2438279 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 2438279 ']' 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 2438279 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2438279 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2438279' 00:38:42.096 killing process with pid 2438279 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 2438279 00:38:42.096 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 2438279 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.038 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:44.948 00:38:44.948 real 1m13.261s 00:38:44.948 user 4m20.663s 00:38:44.948 sys 0m8.953s 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:44.948 ************************************ 00:38:44.948 END TEST nvmf_initiator_timeout 00:38:44.948 ************************************ 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:38:44.948 08:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 
-- # local -a pci_net_devs 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:48.241 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:48.242 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:48.242 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:48.242 Found net devices under 0000:84:00.0: cvl_0_0 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:48.242 Found net devices under 0000:84:00.1: cvl_0_1 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test 
nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:48.242 ************************************ 00:38:48.242 START TEST nvmf_perf_adq 00:38:48.242 ************************************ 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:38:48.242 * Looking for test storage... 00:38:48.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.242 08:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:38:48.242 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:51.532 08:52:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:51.532 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:51.532 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.532 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:51.533 Found net devices under 0000:84:00.0: cvl_0_0 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:51.533 Found net devices under 0000:84:00.1: cvl_0_1 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:38:51.533 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:38:52.102 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:38:54.642 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.930 08:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:59.930 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:59.930 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.930 08:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:59.930 Found net devices under 0000:84:00.0: cvl_0_0 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:59.930 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:59.931 Found net devices under 0000:84:00.1: cvl_0_1 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:59.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:38:59.931 00:38:59.931 --- 10.0.0.2 ping statistics --- 00:38:59.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.931 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:59.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:38:59.931 00:38:59.931 --- 10.0.0.1 ping statistics --- 00:38:59.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.931 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:59.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2450824 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2450824 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2450824 ']' 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:59.931 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:38:59.931 [2024-07-23 08:52:12.231463] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:59.931 [2024-07-23 08:52:12.231766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.931 EAL: No free 2048 kB hugepages reported on node 1 00:39:00.192 [2024-07-23 08:52:12.520716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:00.764 [2024-07-23 08:52:13.032528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:00.764 [2024-07-23 08:52:13.032605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:00.764 [2024-07-23 08:52:13.032679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:00.764 [2024-07-23 08:52:13.032726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:00.764 [2024-07-23 08:52:13.032772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
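The nvmf_tcp_init and nvmfappstart trace above boils down to a short, repeatable sequence: the two ice ports (0000:84:00.0 and 0000:84:00.1, exposed as cvl_0_0 and cvl_0_1) are detected, cvl_0_0 is moved into a dedicated network namespace to act as the target side, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened on the initiator interface, connectivity is verified with ping in both directions, and nvmf_tgt is launched inside the namespace with --wait-for-rpc. A condensed sketch of those commands follows; the interface names, addresses, and core mask are the values from this particular run rather than fixed constants of the test suite.

ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &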
00:39:00.764 [2024-07-23 08:52:13.032994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.764 [2024-07-23 08:52:13.033101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:00.764 [2024-07-23 08:52:13.033150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.764 [2024-07-23 08:52:13.033164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:00.764 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:00.764 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:39:00.764 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:00.764 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:00.764 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.022 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.593 [2024-07-23 08:52:13.828935] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.593 Malloc1 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:01.593 [2024-07-23 08:52:13.961560] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2451101 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:39:01.593 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:01.853 EAL: No free 2048 kB hugepages reported on node 1 00:39:03.765 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:39:03.765 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.765 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:03.765 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.765 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:39:03.765 "tick_rate": 2700000000, 00:39:03.765 "poll_groups": [ 00:39:03.765 { 00:39:03.765 "name": "nvmf_tgt_poll_group_000", 00:39:03.765 "admin_qpairs": 1, 00:39:03.765 "io_qpairs": 1, 00:39:03.765 "current_admin_qpairs": 1, 00:39:03.765 
"current_io_qpairs": 1, 00:39:03.765 "pending_bdev_io": 0, 00:39:03.765 "completed_nvme_io": 12151, 00:39:03.765 "transports": [ 00:39:03.765 { 00:39:03.765 "trtype": "TCP" 00:39:03.765 } 00:39:03.765 ] 00:39:03.765 }, 00:39:03.765 { 00:39:03.765 "name": "nvmf_tgt_poll_group_001", 00:39:03.765 "admin_qpairs": 0, 00:39:03.765 "io_qpairs": 1, 00:39:03.765 "current_admin_qpairs": 0, 00:39:03.765 "current_io_qpairs": 1, 00:39:03.765 "pending_bdev_io": 0, 00:39:03.765 "completed_nvme_io": 12154, 00:39:03.765 "transports": [ 00:39:03.765 { 00:39:03.765 "trtype": "TCP" 00:39:03.765 } 00:39:03.765 ] 00:39:03.765 }, 00:39:03.765 { 00:39:03.765 "name": "nvmf_tgt_poll_group_002", 00:39:03.765 "admin_qpairs": 0, 00:39:03.765 "io_qpairs": 1, 00:39:03.765 "current_admin_qpairs": 0, 00:39:03.765 "current_io_qpairs": 1, 00:39:03.765 "pending_bdev_io": 0, 00:39:03.765 "completed_nvme_io": 12410, 00:39:03.765 "transports": [ 00:39:03.765 { 00:39:03.765 "trtype": "TCP" 00:39:03.765 } 00:39:03.765 ] 00:39:03.765 }, 00:39:03.765 { 00:39:03.765 "name": "nvmf_tgt_poll_group_003", 00:39:03.765 "admin_qpairs": 0, 00:39:03.765 "io_qpairs": 1, 00:39:03.765 "current_admin_qpairs": 0, 00:39:03.765 "current_io_qpairs": 1, 00:39:03.765 "pending_bdev_io": 0, 00:39:03.765 "completed_nvme_io": 12017, 00:39:03.765 "transports": [ 00:39:03.765 { 00:39:03.765 "trtype": "TCP" 00:39:03.765 } 00:39:03.765 ] 00:39:03.765 } 00:39:03.765 ] 00:39:03.765 }' 00:39:03.765 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:39:03.765 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:39:03.765 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:39:03.766 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:39:03.766 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2451101 00:39:11.898 Initializing NVMe Controllers 00:39:11.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:11.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:39:11.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:39:11.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:39:11.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:39:11.898 Initialization complete. Launching workers. 
00:39:11.898 ======================================================== 00:39:11.898 Latency(us) 00:39:11.898 Device Information : IOPS MiB/s Average min max 00:39:11.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7000.10 27.34 9145.99 4353.38 14488.92 00:39:11.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6987.20 27.29 9164.12 3415.96 13787.40 00:39:11.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7094.20 27.71 9023.12 2820.63 14472.03 00:39:11.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6977.60 27.26 9176.57 3903.91 14529.51 00:39:11.899 ======================================================== 00:39:11.899 Total : 28059.09 109.61 9127.04 2820.63 14529.51 00:39:11.899 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:11.899 rmmod nvme_tcp 00:39:11.899 rmmod nvme_fabrics 00:39:11.899 rmmod nvme_keyring 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2450824 ']' 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2450824 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2450824 ']' 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2450824 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2450824 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2450824' 00:39:11.899 killing process with pid 2450824 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2450824 00:39:11.899 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2450824 00:39:14.442 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:14.442 
08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:14.442 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:14.442 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:14.442 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:14.442 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.442 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.442 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.351 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:16.351 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:39:16.351 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:39:16.921 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:39:19.469 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:24.749 08:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:24.749 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:24.750 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:24.750 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:24.750 Found net devices under 0000:84:00.0: cvl_0_0 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.750 08:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:24.750 Found net devices under 0000:84:00.1: cvl_0_1 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:24.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:24.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:39:24.750 00:39:24.750 --- 10.0.0.2 ping statistics --- 00:39:24.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.750 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:24.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:24.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:39:24.750 00:39:24.750 --- 10.0.0.1 ping statistics --- 00:39:24.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.750 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:39:24.750 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:39:24.751 net.core.busy_poll = 1 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:39:24.751 net.core.busy_read = 1 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:39:24.751 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:39:24.751 08:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2453850 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2453850 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2453850 ']' 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:24.751 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:24.751 [2024-07-23 08:52:37.236820] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:24.751 [2024-07-23 08:52:37.237151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:25.010 EAL: No free 2048 kB hugepages reported on node 1 00:39:25.270 [2024-07-23 08:52:37.558758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:25.529 [2024-07-23 08:52:38.044091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:25.529 [2024-07-23 08:52:38.044220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:25.529 [2024-07-23 08:52:38.044283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:25.529 [2024-07-23 08:52:38.044366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:25.529 [2024-07-23 08:52:38.044395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
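The second pass (perf_adq.sh@86 onward) reloads the ice driver, rebuilds the same namespace topology, and then applies the ADQ-specific host configuration shown in adq_configure_driver above: hardware TC offload, busy polling, an mqprio qdisc that carves out a dedicated traffic class, and a hardware flower filter that steers NVMe/TCP traffic destined for 10.0.0.2:4420 onto that class. Condensed, with the ip netns exec cvl_0_0_ns_spdk prefix dropped where the trace uses it:

ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
scripts/perf/nvmf/set_xps_rxqs cvl_0_0    # SPDK helper that aligns transmit queue selection with the ADQ receive queues

The target is then restarted exactly as in the first pass, except that sock_impl_set_options is called with --enable-placement-id 1 and the transport is created with --sock-priority 1, so that connection placement on poll groups can follow the hardware queues.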
00:39:25.529 [2024-07-23 08:52:38.044519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:25.529 [2024-07-23 08:52:38.044597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:25.529 [2024-07-23 08:52:38.044649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.529 [2024-07-23 08:52:38.044662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.096 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.664 [2024-07-23 08:52:39.059942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.664 Malloc1 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.664 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:26.927 [2024-07-23 08:52:39.193264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2454040 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:39:26.927 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:26.927 EAL: No free 2048 kB hugepages reported on node 1 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:39:28.833 "tick_rate": 2700000000, 00:39:28.833 "poll_groups": [ 00:39:28.833 { 00:39:28.833 "name": "nvmf_tgt_poll_group_000", 00:39:28.833 "admin_qpairs": 1, 00:39:28.833 "io_qpairs": 1, 00:39:28.833 "current_admin_qpairs": 1, 00:39:28.833 
"current_io_qpairs": 1, 00:39:28.833 "pending_bdev_io": 0, 00:39:28.833 "completed_nvme_io": 13956, 00:39:28.833 "transports": [ 00:39:28.833 { 00:39:28.833 "trtype": "TCP" 00:39:28.833 } 00:39:28.833 ] 00:39:28.833 }, 00:39:28.833 { 00:39:28.833 "name": "nvmf_tgt_poll_group_001", 00:39:28.833 "admin_qpairs": 0, 00:39:28.833 "io_qpairs": 3, 00:39:28.833 "current_admin_qpairs": 0, 00:39:28.833 "current_io_qpairs": 3, 00:39:28.833 "pending_bdev_io": 0, 00:39:28.833 "completed_nvme_io": 14471, 00:39:28.833 "transports": [ 00:39:28.833 { 00:39:28.833 "trtype": "TCP" 00:39:28.833 } 00:39:28.833 ] 00:39:28.833 }, 00:39:28.833 { 00:39:28.833 "name": "nvmf_tgt_poll_group_002", 00:39:28.833 "admin_qpairs": 0, 00:39:28.833 "io_qpairs": 0, 00:39:28.833 "current_admin_qpairs": 0, 00:39:28.833 "current_io_qpairs": 0, 00:39:28.833 "pending_bdev_io": 0, 00:39:28.833 "completed_nvme_io": 0, 00:39:28.833 "transports": [ 00:39:28.833 { 00:39:28.833 "trtype": "TCP" 00:39:28.833 } 00:39:28.833 ] 00:39:28.833 }, 00:39:28.833 { 00:39:28.833 "name": "nvmf_tgt_poll_group_003", 00:39:28.833 "admin_qpairs": 0, 00:39:28.833 "io_qpairs": 0, 00:39:28.833 "current_admin_qpairs": 0, 00:39:28.833 "current_io_qpairs": 0, 00:39:28.833 "pending_bdev_io": 0, 00:39:28.833 "completed_nvme_io": 0, 00:39:28.833 "transports": [ 00:39:28.833 { 00:39:28.833 "trtype": "TCP" 00:39:28.833 } 00:39:28.833 ] 00:39:28.833 } 00:39:28.833 ] 00:39:28.833 }' 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:39:28.833 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2454040 00:39:38.811 Initializing NVMe Controllers 00:39:38.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:38.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:39:38.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:39:38.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:39:38.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:39:38.811 Initialization complete. Launching workers. 
00:39:38.811 ======================================================== 00:39:38.811 Latency(us) 00:39:38.811 Device Information : IOPS MiB/s Average min max 00:39:38.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 2807.80 10.97 22872.03 3862.69 75780.67 00:39:38.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 2740.80 10.71 23363.66 4034.08 72472.38 00:39:38.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 2627.10 10.26 24373.08 4219.35 77136.63 00:39:38.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7965.70 31.12 8036.86 3291.31 11684.85 00:39:38.811 ======================================================== 00:39:38.811 Total : 16141.40 63.05 15878.73 3291.31 77136.63 00:39:38.811 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:38.811 rmmod nvme_tcp 00:39:38.811 rmmod nvme_fabrics 00:39:38.811 rmmod nvme_keyring 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2453850 ']' 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2453850 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2453850 ']' 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2453850 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2453850 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2453850' 00:39:38.811 killing process with pid 2453850 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2453850 00:39:38.811 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2453850 00:39:39.383 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:39.383 
08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:39.383 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:39.383 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:39.383 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:39.383 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.383 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.383 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:41.927 00:39:41.927 real 0m53.236s 00:39:41.927 user 2m59.389s 00:39:41.927 sys 0m11.835s 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:39:41.927 ************************************ 00:39:41.927 END TEST nvmf_perf_adq 00:39:41.927 ************************************ 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:41.927 ************************************ 00:39:41.927 START TEST nvmf_shutdown 00:39:41.927 ************************************ 00:39:41.927 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:39:41.927 * Looking for test storage... 
00:39:41.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.927 08:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:39:41.927 08:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:41.927 ************************************ 00:39:41.927 START TEST nvmf_shutdown_tc1 00:39:41.927 ************************************ 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:41.927 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:39:41.928 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:45.221 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:45.221 08:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:45.221 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:45.221 Found net devices under 0000:84:00.0: cvl_0_0 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:45.221 Found net devices under 0000:84:00.1: cvl_0_1 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:45.221 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:45.222 08:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:45.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:45.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:39:45.222 00:39:45.222 --- 10.0.0.2 ping statistics --- 00:39:45.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.222 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:45.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:45.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:39:45.222 00:39:45.222 --- 10.0.0.1 ping statistics --- 00:39:45.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.222 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2457565 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2457565 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2457565 ']' 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:45.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:45.222 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:45.222 [2024-07-23 08:52:57.466195] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:45.222 [2024-07-23 08:52:57.466390] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:45.222 EAL: No free 2048 kB hugepages reported on node 1 00:39:45.222 [2024-07-23 08:52:57.662124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:45.480 [2024-07-23 08:52:57.985550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:45.480 [2024-07-23 08:52:57.985627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:45.480 [2024-07-23 08:52:57.985662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:45.480 [2024-07-23 08:52:57.985688] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:45.480 [2024-07-23 08:52:57.985714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
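The nvmf_tcp_init steps traced above boil down to moving one port of the two-port NIC (cvl_0_0) into a private network namespace and addressing both ends before the target is started inside that namespace. Condensed from the trace of this run (cvl_0_0/cvl_0_1 and the 10.0.0.x/24 addresses are specific to this host; this is a sketch of what the common.sh helpers did here, not a standalone setup script):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability

The two pings are the ones whose statistics appear in the trace; once both succeed, nvmf_tgt is started under ip netns exec cvl_0_0_ns_spdk (the nvmfpid=2457565 seen just above), so the target listens from inside the namespace while the initiator side stays on the host.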
00:39:45.480 [2024-07-23 08:52:57.985862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:45.480 [2024-07-23 08:52:57.985926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:45.480 [2024-07-23 08:52:57.988389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:39:45.480 [2024-07-23 08:52:57.988392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:46.855 [2024-07-23 08:52:59.136902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.855 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:46.855 Malloc1 00:39:46.855 [2024-07-23 08:52:59.317328] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.113 Malloc2 00:39:47.114 Malloc3 00:39:47.371 Malloc4 00:39:47.371 Malloc5 00:39:47.629 Malloc6 00:39:47.629 Malloc7 00:39:47.887 Malloc8 00:39:47.887 Malloc9 00:39:48.147 Malloc10 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2457883 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2457883 /var/tmp/bdevperf.sock 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2457883 ']' 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:48.147 08:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:48.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.147 { 00:39:48.147 "params": { 00:39:48.147 "name": "Nvme$subsystem", 00:39:48.147 "trtype": "$TEST_TRANSPORT", 00:39:48.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.147 "adrfam": "ipv4", 00:39:48.147 "trsvcid": "$NVMF_PORT", 00:39:48.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.147 "hdgst": ${hdgst:-false}, 00:39:48.147 "ddgst": ${ddgst:-false} 00:39:48.147 }, 00:39:48.147 "method": "bdev_nvme_attach_controller" 00:39:48.147 } 00:39:48.147 EOF 00:39:48.147 )") 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.147 { 00:39:48.147 "params": { 00:39:48.147 "name": "Nvme$subsystem", 00:39:48.147 "trtype": "$TEST_TRANSPORT", 00:39:48.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.147 "adrfam": "ipv4", 00:39:48.147 "trsvcid": "$NVMF_PORT", 00:39:48.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.147 "hdgst": ${hdgst:-false}, 00:39:48.147 "ddgst": ${ddgst:-false} 00:39:48.147 }, 00:39:48.147 "method": "bdev_nvme_attach_controller" 00:39:48.147 } 00:39:48.147 EOF 00:39:48.147 )") 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.147 { 00:39:48.147 "params": { 00:39:48.147 "name": 
"Nvme$subsystem", 00:39:48.147 "trtype": "$TEST_TRANSPORT", 00:39:48.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.147 "adrfam": "ipv4", 00:39:48.147 "trsvcid": "$NVMF_PORT", 00:39:48.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.147 "hdgst": ${hdgst:-false}, 00:39:48.147 "ddgst": ${ddgst:-false} 00:39:48.147 }, 00:39:48.147 "method": "bdev_nvme_attach_controller" 00:39:48.147 } 00:39:48.147 EOF 00:39:48.147 )") 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.147 { 00:39:48.147 "params": { 00:39:48.147 "name": "Nvme$subsystem", 00:39:48.147 "trtype": "$TEST_TRANSPORT", 00:39:48.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.147 "adrfam": "ipv4", 00:39:48.147 "trsvcid": "$NVMF_PORT", 00:39:48.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.147 "hdgst": ${hdgst:-false}, 00:39:48.147 "ddgst": ${ddgst:-false} 00:39:48.147 }, 00:39:48.147 "method": "bdev_nvme_attach_controller" 00:39:48.147 } 00:39:48.147 EOF 00:39:48.147 )") 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.147 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.147 { 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme$subsystem", 00:39:48.148 "trtype": "$TEST_TRANSPORT", 00:39:48.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "$NVMF_PORT", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.148 "hdgst": ${hdgst:-false}, 00:39:48.148 "ddgst": ${ddgst:-false} 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 } 00:39:48.148 EOF 00:39:48.148 )") 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.148 { 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme$subsystem", 00:39:48.148 "trtype": "$TEST_TRANSPORT", 00:39:48.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "$NVMF_PORT", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.148 "hdgst": ${hdgst:-false}, 00:39:48.148 "ddgst": ${ddgst:-false} 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 } 00:39:48.148 EOF 00:39:48.148 )") 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.148 { 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme$subsystem", 00:39:48.148 "trtype": "$TEST_TRANSPORT", 00:39:48.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "$NVMF_PORT", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.148 "hdgst": ${hdgst:-false}, 00:39:48.148 "ddgst": ${ddgst:-false} 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 } 00:39:48.148 EOF 00:39:48.148 )") 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.148 { 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme$subsystem", 00:39:48.148 "trtype": "$TEST_TRANSPORT", 00:39:48.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "$NVMF_PORT", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.148 "hdgst": ${hdgst:-false}, 00:39:48.148 "ddgst": ${ddgst:-false} 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 } 00:39:48.148 EOF 00:39:48.148 )") 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.148 { 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme$subsystem", 00:39:48.148 "trtype": "$TEST_TRANSPORT", 00:39:48.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "$NVMF_PORT", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.148 "hdgst": ${hdgst:-false}, 00:39:48.148 "ddgst": ${ddgst:-false} 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 } 00:39:48.148 EOF 00:39:48.148 )") 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:48.148 { 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme$subsystem", 00:39:48.148 "trtype": "$TEST_TRANSPORT", 00:39:48.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "$NVMF_PORT", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.148 "hdgst": ${hdgst:-false}, 00:39:48.148 "ddgst": ${ddgst:-false} 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 } 00:39:48.148 EOF 00:39:48.148 )") 00:39:48.148 08:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:39:48.148 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme1", 00:39:48.148 "trtype": "tcp", 00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme2", 00:39:48.148 "trtype": "tcp", 00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme3", 00:39:48.148 "trtype": "tcp", 00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme4", 00:39:48.148 "trtype": "tcp", 00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme5", 00:39:48.148 "trtype": "tcp", 00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme6", 00:39:48.148 "trtype": "tcp", 00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme7", 00:39:48.148 "trtype": "tcp", 00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme8", 00:39:48.148 "trtype": "tcp", 
00:39:48.148 "traddr": "10.0.0.2", 00:39:48.148 "adrfam": "ipv4", 00:39:48.148 "trsvcid": "4420", 00:39:48.148 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:39:48.148 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:39:48.148 "hdgst": false, 00:39:48.148 "ddgst": false 00:39:48.148 }, 00:39:48.148 "method": "bdev_nvme_attach_controller" 00:39:48.148 },{ 00:39:48.148 "params": { 00:39:48.148 "name": "Nvme9", 00:39:48.149 "trtype": "tcp", 00:39:48.149 "traddr": "10.0.0.2", 00:39:48.149 "adrfam": "ipv4", 00:39:48.149 "trsvcid": "4420", 00:39:48.149 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:39:48.149 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:39:48.149 "hdgst": false, 00:39:48.149 "ddgst": false 00:39:48.149 }, 00:39:48.149 "method": "bdev_nvme_attach_controller" 00:39:48.149 },{ 00:39:48.149 "params": { 00:39:48.149 "name": "Nvme10", 00:39:48.149 "trtype": "tcp", 00:39:48.149 "traddr": "10.0.0.2", 00:39:48.149 "adrfam": "ipv4", 00:39:48.149 "trsvcid": "4420", 00:39:48.149 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:39:48.149 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:39:48.149 "hdgst": false, 00:39:48.149 "ddgst": false 00:39:48.149 }, 00:39:48.149 "method": "bdev_nvme_attach_controller" 00:39:48.149 }' 00:39:48.407 [2024-07-23 08:53:00.670034] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:48.407 [2024-07-23 08:53:00.670205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:48.407 EAL: No free 2048 kB hugepages reported on node 1 00:39:48.407 [2024-07-23 08:53:00.873038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.974 [2024-07-23 08:53:01.188155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2457883 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:39:52.269 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:39:52.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2457883 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2457565 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.846 { 00:39:52.846 "params": { 00:39:52.846 "name": "Nvme$subsystem", 00:39:52.846 "trtype": "$TEST_TRANSPORT", 00:39:52.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.846 "adrfam": "ipv4", 00:39:52.846 "trsvcid": "$NVMF_PORT", 00:39:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.846 "hdgst": ${hdgst:-false}, 00:39:52.846 "ddgst": ${ddgst:-false} 00:39:52.846 }, 00:39:52.846 "method": "bdev_nvme_attach_controller" 00:39:52.846 } 00:39:52.846 EOF 00:39:52.846 )") 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.846 { 00:39:52.846 "params": { 00:39:52.846 "name": "Nvme$subsystem", 00:39:52.846 "trtype": "$TEST_TRANSPORT", 00:39:52.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.846 "adrfam": "ipv4", 00:39:52.846 "trsvcid": "$NVMF_PORT", 00:39:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.846 "hdgst": ${hdgst:-false}, 00:39:52.846 "ddgst": ${ddgst:-false} 00:39:52.846 }, 00:39:52.846 "method": "bdev_nvme_attach_controller" 00:39:52.846 } 00:39:52.846 EOF 00:39:52.846 )") 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.846 { 00:39:52.846 "params": { 00:39:52.846 "name": "Nvme$subsystem", 00:39:52.846 "trtype": "$TEST_TRANSPORT", 00:39:52.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.846 "adrfam": "ipv4", 00:39:52.846 "trsvcid": "$NVMF_PORT", 00:39:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.846 "hdgst": ${hdgst:-false}, 00:39:52.846 "ddgst": ${ddgst:-false} 00:39:52.846 }, 00:39:52.846 "method": "bdev_nvme_attach_controller" 00:39:52.846 } 00:39:52.846 EOF 00:39:52.846 )") 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.846 08:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.846 { 00:39:52.846 "params": { 00:39:52.846 "name": "Nvme$subsystem", 00:39:52.846 "trtype": "$TEST_TRANSPORT", 00:39:52.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.846 "adrfam": "ipv4", 00:39:52.846 "trsvcid": "$NVMF_PORT", 00:39:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.846 "hdgst": ${hdgst:-false}, 00:39:52.846 "ddgst": ${ddgst:-false} 00:39:52.846 }, 00:39:52.846 "method": "bdev_nvme_attach_controller" 00:39:52.846 } 00:39:52.846 EOF 00:39:52.846 )") 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.846 { 00:39:52.846 "params": { 00:39:52.846 "name": "Nvme$subsystem", 00:39:52.846 "trtype": "$TEST_TRANSPORT", 00:39:52.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.846 "adrfam": "ipv4", 00:39:52.846 "trsvcid": "$NVMF_PORT", 00:39:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.846 "hdgst": ${hdgst:-false}, 00:39:52.846 "ddgst": ${ddgst:-false} 00:39:52.846 }, 00:39:52.846 "method": "bdev_nvme_attach_controller" 00:39:52.846 } 00:39:52.846 EOF 00:39:52.846 )") 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.846 { 00:39:52.846 "params": { 00:39:52.846 "name": "Nvme$subsystem", 00:39:52.846 "trtype": "$TEST_TRANSPORT", 00:39:52.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.846 "adrfam": "ipv4", 00:39:52.846 "trsvcid": "$NVMF_PORT", 00:39:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.846 "hdgst": ${hdgst:-false}, 00:39:52.846 "ddgst": ${ddgst:-false} 00:39:52.846 }, 00:39:52.846 "method": "bdev_nvme_attach_controller" 00:39:52.846 } 00:39:52.846 EOF 00:39:52.846 )") 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.846 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.847 { 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme$subsystem", 00:39:52.847 "trtype": "$TEST_TRANSPORT", 00:39:52.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "$NVMF_PORT", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.847 "hdgst": ${hdgst:-false}, 00:39:52.847 "ddgst": ${ddgst:-false} 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 } 00:39:52.847 EOF 00:39:52.847 )") 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.847 { 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme$subsystem", 00:39:52.847 "trtype": "$TEST_TRANSPORT", 00:39:52.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "$NVMF_PORT", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.847 "hdgst": ${hdgst:-false}, 00:39:52.847 "ddgst": ${ddgst:-false} 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 } 00:39:52.847 EOF 00:39:52.847 )") 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.847 { 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme$subsystem", 00:39:52.847 "trtype": "$TEST_TRANSPORT", 00:39:52.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "$NVMF_PORT", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.847 "hdgst": ${hdgst:-false}, 00:39:52.847 "ddgst": ${ddgst:-false} 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 } 00:39:52.847 EOF 00:39:52.847 )") 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:52.847 { 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme$subsystem", 00:39:52.847 "trtype": "$TEST_TRANSPORT", 00:39:52.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "$NVMF_PORT", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.847 "hdgst": ${hdgst:-false}, 00:39:52.847 "ddgst": ${ddgst:-false} 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 } 00:39:52.847 EOF 00:39:52.847 )") 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:39:52.847 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme1", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme2", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme3", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme4", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme5", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme6", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme7", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme8", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme9", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 },{ 00:39:52.847 "params": { 00:39:52.847 "name": "Nvme10", 00:39:52.847 "trtype": "tcp", 00:39:52.847 "traddr": "10.0.0.2", 00:39:52.847 "adrfam": "ipv4", 00:39:52.847 "trsvcid": "4420", 00:39:52.847 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:39:52.847 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:39:52.847 "hdgst": false, 00:39:52.847 "ddgst": false 00:39:52.847 }, 00:39:52.847 "method": "bdev_nvme_attach_controller" 00:39:52.847 }' 00:39:53.114 [2024-07-23 08:53:05.448077] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:53.114 [2024-07-23 08:53:05.448415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458434 ] 00:39:53.114 EAL: No free 2048 kB hugepages reported on node 1 00:39:53.373 [2024-07-23 08:53:05.703482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.632 [2024-07-23 08:53:06.015999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.533 Running I/O for 1 seconds... 00:39:56.909 00:39:56.909 Latency(us) 00:39:56.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.909 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.909 Verification LBA range: start 0x0 length 0x400 00:39:56.909 Nvme1n1 : 1.29 149.41 9.34 0.00 0.00 416218.83 31651.46 400789.05 00:39:56.909 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.909 Verification LBA range: start 0x0 length 0x400 00:39:56.909 Nvme2n1 : 1.30 147.68 9.23 0.00 0.00 414285.62 28738.75 403895.94 00:39:56.909 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.909 Verification LBA range: start 0x0 length 0x400 00:39:56.909 Nvme3n1 : 1.26 151.88 9.49 0.00 0.00 398597.44 34564.17 400789.05 00:39:56.909 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.909 Verification LBA range: start 0x0 length 0x400 00:39:56.909 Nvme4n1 : 1.28 150.38 9.40 0.00 0.00 393621.55 28544.57 372827.02 00:39:56.909 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.909 Verification LBA range: start 0x0 length 0x400 00:39:56.909 Nvme5n1 : 1.33 144.81 9.05 0.00 0.00 401418.11 52428.80 383701.14 00:39:56.909 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.909 Verification LBA range: start 0x0 length 0x400 00:39:56.909 Nvme6n1 : 1.33 144.03 9.00 0.00 0.00 395043.52 31457.28 407002.83 00:39:56.909 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.910 Verification LBA range: start 0x0 length 0x400 00:39:56.910 Nvme7n1 : 1.29 148.77 9.30 0.00 0.00 371297.85 54370.61 403895.94 00:39:56.910 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.910 
Verification LBA range: start 0x0 length 0x400 00:39:56.910 Nvme8n1 : 1.31 146.25 9.14 0.00 0.00 370577.76 30098.01 403895.94 00:39:56.910 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.910 Verification LBA range: start 0x0 length 0x400 00:39:56.910 Nvme9n1 : 1.34 143.00 8.94 0.00 0.00 371615.42 38641.97 419430.40 00:39:56.910 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.910 Verification LBA range: start 0x0 length 0x400 00:39:56.910 Nvme10n1 : 1.35 141.89 8.87 0.00 0.00 366087.84 29709.65 447392.43 00:39:56.910 =================================================================================================================== 00:39:56.910 Total : 1468.10 91.76 0.00 0.00 389876.39 28544.57 447392.43 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:58.286 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:58.286 rmmod nvme_tcp 00:39:58.286 rmmod nvme_fabrics 00:39:58.546 rmmod nvme_keyring 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2457565 ']' 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2457565 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2457565 ']' 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2457565 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
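The cleanup traced here is autotest_common.sh's killprocess helper tearing down the nvmf target (pid 2457565): it confirms the pid is still alive with kill -0, inspects the process name via ps so it never signals a process it does not own, then kills and reaps it. A rough, hypothetical rendering of that flow (the real helper also special-cases sudo-owned processes and non-Linux hosts):

# Hypothetical, trimmed-down version of the killprocess pattern in the trace.
killprocess() {
  local pid=$1 process_name
  kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to clean up
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true               # reap it if it was our child
}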
00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2457565 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2457565' 00:39:58.546 killing process with pid 2457565 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2457565 00:39:58.546 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2457565 00:40:02.733 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:02.734 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:02.734 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:02.734 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:02.734 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:02.734 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.734 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.734 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.644 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:04.644 00:40:04.644 real 0m22.791s 00:40:04.644 user 1m15.312s 00:40:04.644 sys 0m5.621s 00:40:04.644 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:40:04.645 ************************************ 00:40:04.645 END TEST nvmf_shutdown_tc1 00:40:04.645 ************************************ 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:04.645 ************************************ 00:40:04.645 START TEST nvmf_shutdown_tc2 00:40:04.645 ************************************ 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:40:04.645 08:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- 
# mlx=() 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:04.645 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:04.645 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:04.645 Found net devices under 0000:84:00.0: cvl_0_0 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:04.645 08:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:04.645 Found net devices under 0000:84:00.1: cvl_0_1 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.645 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:04.646 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:04.646 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:04.646 08:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:04.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:04.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:40:04.907 00:40:04.907 --- 10.0.0.2 ping statistics --- 00:40:04.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.907 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:04.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:04.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:40:04.907 00:40:04.907 --- 10.0.0.1 ping statistics --- 00:40:04.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.907 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2459846 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2459846 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@829 -- # '[' -z 2459846 ']' 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:04.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:04.907 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:05.166 [2024-07-23 08:53:17.449366] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:05.166 [2024-07-23 08:53:17.449675] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.166 EAL: No free 2048 kB hugepages reported on node 1 00:40:05.425 [2024-07-23 08:53:17.732326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:05.684 [2024-07-23 08:53:18.055727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:05.684 [2024-07-23 08:53:18.055811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:05.684 [2024-07-23 08:53:18.055846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.684 [2024-07-23 08:53:18.055872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.684 [2024-07-23 08:53:18.055898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
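waitforlisten in the trace above blocks until the freshly launched nvmf_tgt (pid 2459846) answers on /var/tmp/spdk.sock; the surrounding notices are the usual DPDK EAL banner for the 0x1E core mask. A minimal sketch of such a readiness poll, assuming scripts/rpc.py and the rpc_get_methods call (the real helper also aborts the test early if the process dies before it comes up):

# Hypothetical readiness poll in the spirit of waitforlisten.
wait_for_rpc() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1                 # target exited before it came up
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
      return 0                                             # RPC server is answering
    fi
    sleep 0.5
  done
  return 1                                                 # timed out
}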
00:40:05.684 [2024-07-23 08:53:18.056125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:05.684 [2024-07-23 08:53:18.056245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:05.684 [2024-07-23 08:53:18.056337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.684 [2024-07-23 08:53:18.056379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:06.620 [2024-07-23 08:53:18.968974] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.620 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.621 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.621 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.621 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.621 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:06.621 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:40:06.621 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:40:06.621 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:06.621 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:06.621 Malloc1 00:40:06.621 [2024-07-23 08:53:19.129877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:06.879 Malloc2 00:40:06.879 Malloc3 00:40:07.138 Malloc4 00:40:07.138 Malloc5 00:40:07.396 Malloc6 00:40:07.396 Malloc7 00:40:07.658 Malloc8 00:40:07.658 Malloc9 00:40:07.918 Malloc10 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2460207 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2460207 /var/tmp/bdevperf.sock 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2460207 ']' 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:40:07.918 08:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.918 { 00:40:07.918 "params": { 00:40:07.918 "name": "Nvme$subsystem", 00:40:07.918 "trtype": "$TEST_TRANSPORT", 00:40:07.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.918 "adrfam": "ipv4", 00:40:07.918 "trsvcid": "$NVMF_PORT", 00:40:07.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.918 "hdgst": ${hdgst:-false}, 00:40:07.918 "ddgst": ${ddgst:-false} 00:40:07.918 }, 00:40:07.918 "method": "bdev_nvme_attach_controller" 00:40:07.918 } 00:40:07.918 EOF 00:40:07.918 )") 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.918 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.918 { 00:40:07.918 "params": { 00:40:07.918 "name": "Nvme$subsystem", 00:40:07.918 "trtype": "$TEST_TRANSPORT", 00:40:07.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.918 "adrfam": "ipv4", 00:40:07.918 "trsvcid": "$NVMF_PORT", 00:40:07.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.918 "hdgst": ${hdgst:-false}, 00:40:07.918 "ddgst": ${ddgst:-false} 00:40:07.918 }, 00:40:07.918 "method": "bdev_nvme_attach_controller" 00:40:07.918 } 00:40:07.918 EOF 00:40:07.918 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 
"name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:07.919 { 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme$subsystem", 00:40:07.919 "trtype": "$TEST_TRANSPORT", 00:40:07.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "$NVMF_PORT", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.919 "hdgst": ${hdgst:-false}, 00:40:07.919 "ddgst": ${ddgst:-false} 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 } 00:40:07.919 EOF 00:40:07.919 )") 00:40:07.919 08:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:40:07.919 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme1", 00:40:07.919 "trtype": "tcp", 00:40:07.919 "traddr": "10.0.0.2", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "4420", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:07.919 "hdgst": false, 00:40:07.919 "ddgst": false 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 },{ 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme2", 00:40:07.919 "trtype": "tcp", 00:40:07.919 "traddr": "10.0.0.2", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "4420", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:07.919 "hdgst": false, 00:40:07.919 "ddgst": false 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 },{ 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme3", 00:40:07.919 "trtype": "tcp", 00:40:07.919 "traddr": "10.0.0.2", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "4420", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:40:07.919 "hdgst": false, 00:40:07.919 "ddgst": false 00:40:07.919 }, 00:40:07.919 "method": "bdev_nvme_attach_controller" 00:40:07.919 },{ 00:40:07.919 "params": { 00:40:07.919 "name": "Nvme4", 00:40:07.919 "trtype": "tcp", 00:40:07.919 "traddr": "10.0.0.2", 00:40:07.919 "adrfam": "ipv4", 00:40:07.919 "trsvcid": "4420", 00:40:07.919 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:40:07.919 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:40:07.919 "hdgst": false, 00:40:07.919 "ddgst": false 00:40:07.920 }, 00:40:07.920 "method": "bdev_nvme_attach_controller" 00:40:07.920 },{ 00:40:07.920 "params": { 00:40:07.920 "name": "Nvme5", 00:40:07.920 "trtype": "tcp", 00:40:07.920 "traddr": "10.0.0.2", 00:40:07.920 "adrfam": "ipv4", 00:40:07.920 "trsvcid": "4420", 00:40:07.920 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:40:07.920 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:40:07.920 "hdgst": false, 00:40:07.920 "ddgst": false 00:40:07.920 }, 00:40:07.920 "method": "bdev_nvme_attach_controller" 00:40:07.920 },{ 00:40:07.920 "params": { 00:40:07.920 "name": "Nvme6", 00:40:07.920 "trtype": "tcp", 00:40:07.920 "traddr": "10.0.0.2", 00:40:07.920 "adrfam": "ipv4", 00:40:07.920 "trsvcid": "4420", 00:40:07.920 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:40:07.920 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:40:07.920 "hdgst": false, 00:40:07.920 "ddgst": false 00:40:07.920 }, 00:40:07.920 "method": "bdev_nvme_attach_controller" 00:40:07.920 },{ 00:40:07.920 "params": { 00:40:07.920 "name": "Nvme7", 00:40:07.920 "trtype": "tcp", 00:40:07.920 "traddr": "10.0.0.2", 00:40:07.920 "adrfam": "ipv4", 00:40:07.920 "trsvcid": "4420", 00:40:07.920 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:40:07.920 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:40:07.920 "hdgst": false, 00:40:07.920 "ddgst": false 00:40:07.920 }, 00:40:07.920 "method": "bdev_nvme_attach_controller" 00:40:07.920 },{ 00:40:07.920 "params": { 00:40:07.920 "name": "Nvme8", 00:40:07.920 "trtype": "tcp", 
00:40:07.920 "traddr": "10.0.0.2", 00:40:07.920 "adrfam": "ipv4", 00:40:07.920 "trsvcid": "4420", 00:40:07.920 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:40:07.920 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:40:07.920 "hdgst": false, 00:40:07.920 "ddgst": false 00:40:07.920 }, 00:40:07.920 "method": "bdev_nvme_attach_controller" 00:40:07.920 },{ 00:40:07.920 "params": { 00:40:07.920 "name": "Nvme9", 00:40:07.920 "trtype": "tcp", 00:40:07.920 "traddr": "10.0.0.2", 00:40:07.920 "adrfam": "ipv4", 00:40:07.920 "trsvcid": "4420", 00:40:07.920 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:40:07.920 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:40:07.920 "hdgst": false, 00:40:07.920 "ddgst": false 00:40:07.920 }, 00:40:07.920 "method": "bdev_nvme_attach_controller" 00:40:07.920 },{ 00:40:07.920 "params": { 00:40:07.920 "name": "Nvme10", 00:40:07.920 "trtype": "tcp", 00:40:07.920 "traddr": "10.0.0.2", 00:40:07.920 "adrfam": "ipv4", 00:40:07.920 "trsvcid": "4420", 00:40:07.920 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:40:07.920 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:40:07.920 "hdgst": false, 00:40:07.920 "ddgst": false 00:40:07.920 }, 00:40:07.920 "method": "bdev_nvme_attach_controller" 00:40:07.920 }' 00:40:08.178 [2024-07-23 08:53:20.470301] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:08.178 [2024-07-23 08:53:20.470539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460207 ] 00:40:08.178 EAL: No free 2048 kB hugepages reported on node 1 00:40:08.437 [2024-07-23 08:53:20.724533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.695 [2024-07-23 08:53:21.037747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.599 Running I/O for 10 seconds... 
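With the subsystems in place, the generated JSON is handed to bdevperf through process substitution, which is why the command line in the trace shows --json /dev/fd/63: all ten NVMe-oF controllers are attached before the ten-second verify workload starts. A hedged sketch of that launch; $SPDK_ROOT stands in for the absolute workspace path shown in the log:

# Sketch of the tc2 workload launch traced above.
"$SPDK_ROOT/build/examples/bdevperf" \
  -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
  -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock

Once the controllers are up, the script polls bdev_get_iostat over the bdevperf RPC socket (the waitforio loop traced below) until Nvme1n1 reports at least 100 completed reads, and only then proceeds to shut the workload down.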
00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:40:11.166 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:40:11.167 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:40:11.167 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:40:11.167 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.167 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:11.457 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.457 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:40:11.457 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:40:11.457 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:40:11.717 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:40:11.717 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:40:11.717 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:40:11.717 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:40:11.717 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:11.717 08:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2460207 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2460207 ']' 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2460207 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2460207 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2460207' 00:40:11.717 killing process with pid 2460207 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2460207 00:40:11.717 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2460207 00:40:11.975 Received shutdown signal, test time was about 1.229295 seconds 00:40:11.975 00:40:11.975 Latency(us) 00:40:11.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.975 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme1n1 : 1.20 159.41 9.96 0.00 0.00 394948.46 30680.56 385254.59 00:40:11.975 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme2n1 : 1.21 158.42 9.90 0.00 0.00 389555.39 29515.47 410109.72 00:40:11.975 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme3n1 : 1.20 160.59 10.04 0.00 0.00 374702.59 27185.30 413216.62 00:40:11.975 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme4n1 : 1.18 165.70 10.36 0.00 0.00 353368.20 6019.60 400789.05 00:40:11.975 Job: 
Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme5n1 : 1.23 156.33 9.77 0.00 0.00 368888.79 36311.80 389914.93 00:40:11.975 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme6n1 : 1.12 114.51 7.16 0.00 0.00 485215.19 29903.83 416323.51 00:40:11.975 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme7n1 : 1.22 157.32 9.83 0.00 0.00 348478.58 29903.83 400789.05 00:40:11.975 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.975 Verification LBA range: start 0x0 length 0x400 00:40:11.975 Nvme8n1 : 1.19 161.11 10.07 0.00 0.00 330298.91 31651.46 422537.29 00:40:11.975 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.976 Verification LBA range: start 0x0 length 0x400 00:40:11.976 Nvme9n1 : 1.15 110.90 6.93 0.00 0.00 463233.33 55535.69 422537.29 00:40:11.976 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:11.976 Verification LBA range: start 0x0 length 0x400 00:40:11.976 Nvme10n1 : 1.15 111.52 6.97 0.00 0.00 447322.64 29127.11 450499.32 00:40:11.976 =================================================================================================================== 00:40:11.976 Total : 1455.82 90.99 0.00 0.00 387782.01 6019.60 450499.32 00:40:13.351 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2459846 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:14.285 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:14.285 rmmod nvme_tcp 00:40:14.285 rmmod nvme_fabrics 00:40:14.544 rmmod nvme_keyring 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:14.544 
08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2459846 ']' 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2459846 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2459846 ']' 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2459846 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2459846 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2459846' 00:40:14.544 killing process with pid 2459846 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2459846 00:40:14.544 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2459846 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:18.734 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.646 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:20.646 00:40:20.646 real 0m15.919s 00:40:20.646 user 0m54.877s 00:40:20.646 sys 0m2.776s 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:20.647 ************************************ 
00:40:20.647 END TEST nvmf_shutdown_tc2 00:40:20.647 ************************************ 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:20.647 ************************************ 00:40:20.647 START TEST nvmf_shutdown_tc3 00:40:20.647 ************************************ 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 
== 0 )) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:20.647 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:20.647 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:20.647 Found net devices under 0000:84:00.0: cvl_0_0 
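The setup phase above scans the known Intel and Mellanox PCI IDs and resolves each matching device to its kernel net interface through sysfs, which is where the cvl_0_0 name comes from; the same lookup is repeated next for the second port. A stripped-down sketch of that resolution step (the PCI addresses are the ones reported in the trace; the loop body is illustrative, not the exact nvmf/common.sh code):

# Sketch only: map candidate NVMe-oF-capable NICs to their netdev names.
pci_devs=(0000:84:00.0 0000:84:00.1)       # addresses reported above
net_devs=()
for pci in "${pci_devs[@]}"; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue          # NIC has no bound kernel netdev
        net_devs+=("${dev##*/}")           # e.g. cvl_0_0, cvl_0_1
    done
done
echo "usable interfaces: ${net_devs[*]}"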
00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:20.647 Found net devices under 0000:84:00.1: cvl_0_1 00:40:20.647 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:20.648 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:20.648 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:20.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:20.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:40:20.908 00:40:20.908 --- 10.0.0.2 ping statistics --- 00:40:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:20.908 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:20.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:20.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:40:20.908 00:40:20.908 --- 10.0.0.1 ping statistics --- 00:40:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:20.908 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2461717 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2461717 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2461717 ']' 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:20.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
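With both ports of the NIC found, the harness builds a self-contained NVMe/TCP path before starting the target: one port is moved into a private network namespace and addressed as the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), the firewall is opened for port 4420, reachability is ping-checked, and nvmf_tgt is launched inside the namespace. Collected from the traced commands above (only the repeated ip netns exec prefix and absolute paths are trimmed):

# Sketch only: loopback-style NVMe/TCP topology across a network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &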
00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:20.908 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:21.166 [2024-07-23 08:53:33.468583] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:21.166 [2024-07-23 08:53:33.468925] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:21.166 EAL: No free 2048 kB hugepages reported on node 1 00:40:21.425 [2024-07-23 08:53:33.737264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:21.684 [2024-07-23 08:53:34.059939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:21.684 [2024-07-23 08:53:34.060027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:21.684 [2024-07-23 08:53:34.060061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:21.684 [2024-07-23 08:53:34.060087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:21.684 [2024-07-23 08:53:34.060113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:21.684 [2024-07-23 08:53:34.060276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:21.684 [2024-07-23 08:53:34.060371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:21.684 [2024-07-23 08:53:34.060427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:21.684 [2024-07-23 08:53:34.060459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:22.621 [2024-07-23 08:53:34.905665] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.621 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:40:22.621 Malloc1 00:40:22.621 [2024-07-23 08:53:35.077664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:22.880 Malloc2 00:40:22.880 Malloc3 00:40:23.138 Malloc4 00:40:23.138 Malloc5 00:40:23.397 Malloc6 00:40:23.397 Malloc7 00:40:23.656 Malloc8 00:40:23.656 Malloc9 00:40:23.915 Malloc10 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2462154 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2462154 /var/tmp/bdevperf.sock 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2462154 ']' 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:23.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
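Before the tc3 bdevperf run starts, the create_subsystems step writes a batch of RPCs into rpcs.txt and replays it against the freshly started target; that is what produces the Malloc1 .. Malloc10 bdevs and the TCP listener on 10.0.0.2:4420 shown above. The rpcs.txt contents are not echoed in the log, so the following per-index sequence is only a plausible sketch (bdev size, serial number and exact flags are assumptions):

# Sketch only: the kind of RPC batch generated for each subsystem index i.
for i in {1..10}; do
    scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done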
00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.915 { 00:40:23.915 "params": { 00:40:23.915 "name": "Nvme$subsystem", 00:40:23.915 "trtype": "$TEST_TRANSPORT", 00:40:23.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.915 "adrfam": "ipv4", 00:40:23.915 "trsvcid": "$NVMF_PORT", 00:40:23.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.915 "hdgst": ${hdgst:-false}, 00:40:23.915 "ddgst": ${ddgst:-false} 00:40:23.915 }, 00:40:23.915 "method": "bdev_nvme_attach_controller" 00:40:23.915 } 00:40:23.915 EOF 00:40:23.915 )") 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.915 { 00:40:23.915 "params": { 00:40:23.915 "name": "Nvme$subsystem", 00:40:23.915 "trtype": "$TEST_TRANSPORT", 00:40:23.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.915 "adrfam": "ipv4", 00:40:23.915 "trsvcid": "$NVMF_PORT", 00:40:23.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.915 "hdgst": ${hdgst:-false}, 00:40:23.915 "ddgst": ${ddgst:-false} 00:40:23.915 }, 00:40:23.915 "method": "bdev_nvme_attach_controller" 00:40:23.915 } 00:40:23.915 EOF 00:40:23.915 )") 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.915 { 00:40:23.915 "params": { 00:40:23.915 "name": "Nvme$subsystem", 00:40:23.915 "trtype": "$TEST_TRANSPORT", 00:40:23.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.915 "adrfam": "ipv4", 00:40:23.915 "trsvcid": "$NVMF_PORT", 00:40:23.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.915 "hdgst": ${hdgst:-false}, 00:40:23.915 "ddgst": ${ddgst:-false} 00:40:23.915 }, 00:40:23.915 "method": "bdev_nvme_attach_controller" 00:40:23.915 } 00:40:23.915 EOF 00:40:23.915 )") 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:40:23.915 { 00:40:23.915 "params": { 00:40:23.915 "name": "Nvme$subsystem", 00:40:23.915 "trtype": "$TEST_TRANSPORT", 00:40:23.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.915 "adrfam": "ipv4", 00:40:23.915 "trsvcid": "$NVMF_PORT", 00:40:23.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.915 "hdgst": ${hdgst:-false}, 00:40:23.915 "ddgst": ${ddgst:-false} 00:40:23.915 }, 00:40:23.915 "method": "bdev_nvme_attach_controller" 00:40:23.915 } 00:40:23.915 EOF 00:40:23.915 )") 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.915 { 00:40:23.915 "params": { 00:40:23.915 "name": "Nvme$subsystem", 00:40:23.915 "trtype": "$TEST_TRANSPORT", 00:40:23.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.915 "adrfam": "ipv4", 00:40:23.915 "trsvcid": "$NVMF_PORT", 00:40:23.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.915 "hdgst": ${hdgst:-false}, 00:40:23.915 "ddgst": ${ddgst:-false} 00:40:23.915 }, 00:40:23.915 "method": "bdev_nvme_attach_controller" 00:40:23.915 } 00:40:23.915 EOF 00:40:23.915 )") 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.915 { 00:40:23.915 "params": { 00:40:23.915 "name": "Nvme$subsystem", 00:40:23.915 "trtype": "$TEST_TRANSPORT", 00:40:23.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.915 "adrfam": "ipv4", 00:40:23.915 "trsvcid": "$NVMF_PORT", 00:40:23.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.915 "hdgst": ${hdgst:-false}, 00:40:23.915 "ddgst": ${ddgst:-false} 00:40:23.915 }, 00:40:23.915 "method": "bdev_nvme_attach_controller" 00:40:23.915 } 00:40:23.915 EOF 00:40:23.915 )") 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.915 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.915 { 00:40:23.915 "params": { 00:40:23.916 "name": "Nvme$subsystem", 00:40:23.916 "trtype": "$TEST_TRANSPORT", 00:40:23.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "$NVMF_PORT", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.916 "hdgst": ${hdgst:-false}, 00:40:23.916 "ddgst": ${ddgst:-false} 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 } 00:40:23.916 EOF 00:40:23.916 )") 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.916 08:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.916 { 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme$subsystem", 00:40:23.916 "trtype": "$TEST_TRANSPORT", 00:40:23.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "$NVMF_PORT", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.916 "hdgst": ${hdgst:-false}, 00:40:23.916 "ddgst": ${ddgst:-false} 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 } 00:40:23.916 EOF 00:40:23.916 )") 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.916 { 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme$subsystem", 00:40:23.916 "trtype": "$TEST_TRANSPORT", 00:40:23.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "$NVMF_PORT", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.916 "hdgst": ${hdgst:-false}, 00:40:23.916 "ddgst": ${ddgst:-false} 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 } 00:40:23.916 EOF 00:40:23.916 )") 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:23.916 { 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme$subsystem", 00:40:23.916 "trtype": "$TEST_TRANSPORT", 00:40:23.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "$NVMF_PORT", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:23.916 "hdgst": ${hdgst:-false}, 00:40:23.916 "ddgst": ${ddgst:-false} 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 } 00:40:23.916 EOF 00:40:23.916 )") 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
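The --json document handed to bdevperf below is assembled the way the trace shows: one here-doc fragment per subsystem is appended to a bash array, the fragments are joined with a comma (IFS=,) and the result is validated with jq. Reduced to its core (the outer wrapper object is an assumption about the final shape; the fragment fields mirror the printed config):

# Sketch only: build the bdevperf attach-controller config for subsystems 1..10.
config=()
for i in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
joined=$(IFS=,; printf '%s' "${config[*]}")
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $joined ] } ] }
EOF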
00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:40:23.916 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme1", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme2", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme3", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme4", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme5", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme6", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme7", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme8", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme9", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 },{ 00:40:23.916 "params": { 00:40:23.916 "name": "Nvme10", 00:40:23.916 "trtype": "tcp", 00:40:23.916 "traddr": "10.0.0.2", 00:40:23.916 "adrfam": "ipv4", 00:40:23.916 "trsvcid": "4420", 00:40:23.916 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:40:23.916 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:40:23.916 "hdgst": false, 00:40:23.916 "ddgst": false 00:40:23.916 }, 00:40:23.916 "method": "bdev_nvme_attach_controller" 00:40:23.916 }' 00:40:23.916 [2024-07-23 08:53:36.320160] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:23.916 [2024-07-23 08:53:36.320392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462154 ] 00:40:23.916 EAL: No free 2048 kB hugepages reported on node 1 00:40:24.175 [2024-07-23 08:53:36.478883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.433 [2024-07-23 08:53:36.790929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.965 Running I/O for 10 seconds... 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:40:27.533 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2461717 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2461717 ']' 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2461717 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:27.801 08:53:40 
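The trace above is the waitforio poll from target/shutdown.sh@50-@69: it reads num_read_ops for Nvme1n1 via bdev_get_iostat over /var/tmp/bdevperf.sock and succeeds once the count crosses 100 (67 on the first pass, 131 on the second, with a 0.25 s sleep between polls). A condensed sketch of that loop, reconstructed from the trace; rpc_cmd stands in for the harness wrapper around scripts/rpc.py, and details of the real shutdown.sh may differ.

waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -n "$sock" ] || return 1
    [ -n "$bdev" ] || return 1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then   # 67 on the first pass, 131 on the second
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# In this run it is effectively invoked as: waitforio /var/tmp/bdevperf.sock Nvme1n1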
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2461717 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2461717' 00:40:27.801 killing process with pid 2461717 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2461717 00:40:27.801 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2461717 00:40:27.801 [2024-07-23 08:53:40.261056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 
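Once I/O is confirmed, shutdown_tc3 tears down the first nvmf target app with killprocess 2461717 (autotest_common.sh@948-@972): verify the pid is still alive, read its comm name (reactor_1 here), print the "killing process" message, send the signal and wait for the pid. The sketch below is reconstructed only from the branches visible in this trace; the non-Linux and sudo paths of the real helper are not exercised here and are left out.

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                          # pid must still be alive
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid") # reactor_1 in this run
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                         # autotest_common.sh@972
}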
is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.261991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is 
same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.801 [2024-07-23 08:53:40.262251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same 
with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.262700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with 
the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.266998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the 
state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the 
state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.802 [2024-07-23 08:53:40.267838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.267861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.267889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the 
state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.271995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the 
state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the 
state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.272870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.803 [2024-07-23 08:53:40.277694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the 
state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.277986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the 
state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the 
state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.278926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.283996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the 
state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284320] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.804 [2024-07-23 08:53:40.284378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.804 [2024-07-23 08:53:40.284423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.284448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.284452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805 [2024-07-23 08:53:40.284472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.284490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805 [2024-07-23 08:53:40.284503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.284520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805 [2024-07-23 08:53:40.284529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.284550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805 [2024-07-23 08:53:40.284555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.284577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805
[2024-07-23 08:53:40.284580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805
[2024-07-23 08:53:40.284630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805
[2024-07-23 08:53:40.284654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb880 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805
[2024-07-23 08:53:40.284872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805
[2024-07-23 08:53:40.284922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805
[2024-07-23 08:53:40.284952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805
[2024-07-23 08:53:40.284978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.284991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805
[2024-07-23 08:53:40.285002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805
[2024-07-23 08:53:40.285026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805
[2024-07-23 08:53:40.285051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805
[2024-07-23 08:53:40.285075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8400 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805
[2024-07-23 08:53:40.285201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805
[2024-07-23 08:53:40.285254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805
[2024-07-23 08:53:40.285263] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805 [2024-07-23 08:53:40.285279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.285291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805 [2024-07-23 08:53:40.285304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.285332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.805 [2024-07-23 08:53:40.285349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.285363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.805 [2024-07-23 08:53:40.285376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.805 [2024-07-23 08:53:40.285392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.285400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.285419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.285444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7c80 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.285554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.285592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.285622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.285649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.285677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.285703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.285730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.285757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.285782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the 
state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.285869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.285905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.285934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.285961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.285990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.286016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.286045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.806 [2024-07-23 08:53:40.286072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.286097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.287439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.287492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.287522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.287548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.290531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.290587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.290654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.290687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.290721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.290751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.290783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.290811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.290843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.290872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.290904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.290933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.290964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.290992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.291142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.291171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.291204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:40:27.806 [2024-07-23 08:53:40.291232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.806 [2024-07-23 08:53:40.291908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.806 [2024-07-23 08:53:40.291939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.291967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.291999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.292963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.292991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.293959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.293990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.294022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.294051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.294082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.294110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.294142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.807 [2024-07-23 08:53:40.294176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.807 [2024-07-23 08:53:40.294209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.294238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.294270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.294299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.294341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.294371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:40:27.808 [2024-07-23 08:53:40.294402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.294431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.294462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.294490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.294628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294906] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001fe080 was disconnected and freed. reset controller. 
00:40:27.808 [2024-07-23 08:53:40.294943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.294967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295028] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 08:53:40.295099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 08:53:40.295296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-23 08:53:40.295465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:1with the state(5) to be set 00:40:27.808 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295589] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:1[2024-07-23 08:53:40.295591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-23 08:53:40.295620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:40:27.808 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 [2024-07-23 08:53:40.295668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128[2024-07-23 08:53:40.295716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.808 with the state(5) to be set 00:40:27.808 [2024-07-23 08:53:40.295745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same [2024-07-23 08:53:40.295745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:40:27.808 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.808 [2024-07-23 08:53:40.295775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.295801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.295824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.295848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
c[2024-07-23 08:53:40.295873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.295923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.295948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.295971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.295995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 08:53:40.295996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128[2024-07-23 08:53:40.296094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 08:53:40.296127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296162] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b880 is same with the state(5) to be set 00:40:27.809 [2024-07-23 08:53:40.296319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.296968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.296996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.809 [2024-07-23 08:53:40.297560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.809 [2024-07-23 08:53:40.297591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.297619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.297651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.297679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.297710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.297738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.297769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.297797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.297829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.297858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.297889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.297917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:27.810 [2024-07-23 08:53:40.297948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.297976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 08:53:40.298524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810 [2024-07-23 08:53:40.298553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810 [2024-07-23 
08:53:40.298585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.298613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.298644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.298672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.298703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.298731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.298763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.298792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.298823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.298851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.298881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.298909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.298940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.298968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.298998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.810
[2024-07-23 08:53:40.299026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.810
[2024-07-23 08:53:40.299418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299460] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001fe300 was disconnected and freed. reset controller. 00:40:27.810
[2024-07-23 08:53:40.299465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.810
[2024-07-23 08:53:40.299863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.299887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.299912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.299935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.299959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.299983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80
is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fb880 (9): Bad file descriptor 00:40:27.811
[2024-07-23 08:53:40.300066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300669] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.300962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.300990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.300991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.301018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.301021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.811
[2024-07-23 08:53:40.301047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.811
[2024-07-23 08:53:40.301071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:40:27.811
[2024-07-23 08:53:40.301116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f8400 (9): Bad file descriptor 00:40:27.811
[2024-07-23 08:53:40.301175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7c80 (9): Bad file descriptor 00:40:27.811
[2024-07-23 08:53:40.301233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:40:27.811
[2024-07-23 08:53:40.301287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f8b80 (9): Bad file descriptor 00:40:27.811
[2024-07-23 08:53:40.301394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.812
[2024-07-23 08:53:40.301431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.812
[2024-07-23 08:53:40.301460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.812
[2024-07-23 08:53:40.301487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.812
[2024-07-23 08:53:40.301514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.812
[2024-07-23 08:53:40.301549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.812
[2024-07-23 08:53:40.301577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.812
[2024-07-23 08:53:40.301604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.812
[2024-07-23 08:53:40.301634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(5) to be set 00:40:27.812
[2024-07-23 08:53:40.302916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812
[2024-07-23 08:53:40.302962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same
with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.302989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with 
the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303680] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.303983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the 
state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.304477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set 00:40:27.812 [2024-07-23 08:53:40.305787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:40:27.812 [2024-07-23 08:53:40.305845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:40:27.812 [2024-07-23 
08:53:40.305889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f9a80 (9): Bad file descriptor 00:40:27.812 [2024-07-23 08:53:40.305932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f9300 (9): Bad file descriptor 00:40:27.812 [2024-07-23 08:53:40.306851] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:27.812 [2024-07-23 08:53:40.307060] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:27.812 [2024-07-23 08:53:40.307189] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:27.813 [2024-07-23 08:53:40.307320] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:27.813 [2024-07-23 08:53:40.307442] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:27.813 [2024-07-23 08:53:40.309735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:27.813 [2024-07-23 08:53:40.309838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f9300 with addr=10.0.0.2, port=4420 00:40:27.813 [2024-07-23 08:53:40.309872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:40:27.813 [2024-07-23 08:53:40.310161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:27.813 [2024-07-23 08:53:40.310206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f9a80 with addr=10.0.0.2, port=4420 00:40:27.813 [2024-07-23 08:53:40.310235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:40:27.813 [2024-07-23 08:53:40.310541] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:27.813 [2024-07-23 08:53:40.310824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f9300 (9): Bad file descriptor 00:40:27.813 [2024-07-23 08:53:40.310877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f9a80 (9): Bad file descriptor 00:40:27.813 [2024-07-23 08:53:40.310973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fa980 (9): Bad file descriptor 00:40:27.813 [2024-07-23 08:53:40.311092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fa200 (9): Bad file descriptor 00:40:27.813 [2024-07-23 08:53:40.311198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.813 [2024-07-23 08:53:40.311239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.311272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.813 [2024-07-23 08:53:40.311301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.311343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.813 [2024-07-23 08:53:40.311372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.311400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:27.813 [2024-07-23 08:53:40.311427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.311451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(5) to be set 00:40:27.813 [2024-07-23 08:53:40.311710] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:27.813 [2024-07-23 08:53:40.311946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:40:27.813 [2024-07-23 08:53:40.311988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:40:27.813 [2024-07-23 08:53:40.312022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:40:27.813 [2024-07-23 08:53:40.312071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:40:27.813 [2024-07-23 08:53:40.312100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:40:27.813 [2024-07-23 08:53:40.312125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:40:27.813 [2024-07-23 08:53:40.312238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:27.813 [2024-07-23 08:53:40.312619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.312945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.312972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 
08:53:40.313226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.813 [2024-07-23 08:53:40.313735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:27.813 [2024-07-23 08:53:40.313766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.084 [2024-07-23 08:53:40.313795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.313827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.313855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.313886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.313914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.313945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.313973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.314943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.314972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.085 [2024-07-23 08:53:40.315682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.085 [2024-07-23 08:53:40.315710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:28.085 [2024-07-23 08:53:40.315741 - 08:53:40.334541] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated block: outstanding READ commands (sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 cid:0-4 nsid:1 lba:24576-25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:40:28.086 [2024-07-23 08:53:40.316213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fd680 is same with the state(5) to be set
00:40:28.087 [2024-07-23 08:53:40.322318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fd900 is same with the state(5) to be set
00:40:28.089 [2024-07-23 08:53:40.328513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fdb80 is same with the state(5) to be set
00:40:28.091 [2024-07-23 
08:53:40.334572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.334599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.334627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fde00 is same with the state(5) to be set 00:40:28.091 [2024-07-23 08:53:40.336878] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:40:28.091 [2024-07-23 08:53:40.337017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.091 [2024-07-23 08:53:40.337804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.091 [2024-07-23 08:53:40.337855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.337885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.337916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.337943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.337974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:28.092 [2024-07-23 08:53:40.338797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.338944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.338971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 
08:53:40.339401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.339968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.339995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.340026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.340053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.340084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.092 [2024-07-23 08:53:40.340112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.092 [2024-07-23 08:53:40.340144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.093 [2024-07-23 08:53:40.340897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.093 [2024-07-23 08:53:40.340925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fed00 is same with the state(5) to be set 00:40:28.093 [2024-07-23 08:53:40.347515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.093 [2024-07-23 08:53:40.347566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.093 [2024-07-23 08:53:40.347599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:28.093 [2024-07-23 08:53:40.347655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:40:28.093 [2024-07-23 08:53:40.347693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:40:28.093 [2024-07-23 08:53:40.347883] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.093 [2024-07-23 08:53:40.347956] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
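The ABORTED - SQ DELETION (00/08) status repeated above is the expected result of tearing down the I/O submission queue while reads were still outstanding: the (00/08) pair in the completion lines is the NVMe Status Code Type / Status Code, and SCT 0x00 with SC 0x08 is the generic status "Command Aborted due to SQ Deletion". Below is a minimal, self-contained sketch of that decoding; it follows the bit layout of completion-queue-entry dword 3 from the NVMe base specification and deliberately uses no SPDK headers, so the struct and helper names here are illustrative rather than SPDK API.

/*
 * Illustrative decoder for the "(00/08)" status seen in the aborted reads
 * above.  Input is the 16-bit value formed by CQE dword 3 bits 31:16
 * (phase tag plus the 15-bit status field).  Not SPDK code.
 */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
    uint8_t p;    /* phase tag                  */
    uint8_t sc;   /* status code                */
    uint8_t sct;  /* status code type           */
    uint8_t crd;  /* command retry delay        */
    uint8_t m;    /* more status info available */
    uint8_t dnr;  /* do not retry               */
};

static struct cqe_status decode_status(uint16_t raw)
{
    struct cqe_status s;

    s.p   = raw & 0x1;
    s.sc  = (raw >> 1) & 0xff;
    s.sct = (raw >> 9) & 0x7;
    s.crd = (raw >> 12) & 0x3;
    s.m   = (raw >> 14) & 0x1;
    s.dnr = (raw >> 15) & 0x1;
    return s;
}

int main(void)
{
    /* SCT=0x00, SC=0x08, all other bits clear: the status logged above. */
    uint16_t raw = (uint16_t)((0x00 << 9) | (0x08 << 1));
    struct cqe_status s = decode_status(raw);

    printf("sct=%02x sc=%02x p=%u m=%u dnr=%u -> %s\n",
           s.sct, s.sc, s.p, s.m, s.dnr,
           (s.sct == 0x00 && s.sc == 0x08) ?
               "ABORTED - SQ DELETION" : "other status");
    return 0;
}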
00:40:28.093 [2024-07-23 08:53:40.348020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fb100 (9): Bad file descriptor
00:40:28.093 [2024-07-23 08:53:40.348300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:40:28.093 [2024-07-23 08:53:40.348359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:40:28.093 [2024-07-23 08:53:40.348729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:28.093 [2024-07-23 08:53:40.348781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420
00:40:28.093 [2024-07-23 08:53:40.348813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set
00:40:28.093 [2024-07-23 08:53:40.349083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:28.093 [2024-07-23 08:53:40.349129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7c80 with addr=10.0.0.2, port=4420
00:40:28.093 [2024-07-23 08:53:40.349159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7c80 is same with the state(5) to be set
00:40:28.093 [2024-07-23 08:53:40.349433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:28.093 [2024-07-23 08:53:40.349481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f8400 with addr=10.0.0.2, port=4420
00:40:28.093 [2024-07-23 08:53:40.349518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8400 is same with the state(5) to be set
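For reference, errno = 111 in the posix_sock_create errors above is ECONNREFUSED on Linux: while the subsystems are being reset, nothing is accepting connections at 10.0.0.2:4420, so each reconnect attempt is refused until the target listener comes back. The loop below is a stand-alone POSIX-sockets sketch of that retry-until-the-listener-returns pattern, assuming only the address and port shown in the log; it is not SPDK's reconnect path.

/*
 * Illustrative retry loop for a refused NVMe/TCP connection.  Uses plain
 * POSIX sockets with a crude fixed backoff; address and port are taken
 * from the log lines above.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        /* errno 111 (ECONNREFUSED) means the target port is not listening. */
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                attempt, errno, strerror(errno));
        close(fd);
        sleep(1);                              /* fixed backoff between tries */
    }
    return 1;
}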
00:40:28.093 [2024-07-23 08:53:40.352209 - 08:53:40.356124] nvme_qpair.c: nvme_io_qpair_print_command/spdk_nvme_print_completion: *NOTICE*: 64 READ commands on sqid:1 (cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:28.095 [2024-07-23 08:53:40.356152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fe580 is same with the state(5) to be set
00:40:28.095 [2024-07-23 08:53:40.358273 - 08:53:40.360146] nvme_qpair.c: nvme_io_qpair_print_command/spdk_nvme_print_completion: *NOTICE*: 30 READ commands on sqid:1 (cid:0-29 nsid:1 lba:16384-20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:28.096 [2024-07-23 08:53:40.360176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.360952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.360980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:28.096 [2024-07-23 08:53:40.361436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.096 [2024-07-23 08:53:40.361928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.096 [2024-07-23 08:53:40.361959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.361991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 
08:53:40.362024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.362052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.362082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.362110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.362140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.362167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.362195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fe800 is same with the state(5) to be set 00:40:28.097 [2024-07-23 08:53:40.364844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:40:28.097 [2024-07-23 08:53:40.364895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:40:28.097 [2024-07-23 08:53:40.364931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:40:28.097 [2024-07-23 08:53:40.364963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:40:28.097 [2024-07-23 08:53:40.365330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.097 [2024-07-23 08:53:40.365380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f8b80 with addr=10.0.0.2, port=4420 00:40:28.097 [2024-07-23 08:53:40.365412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set 00:40:28.097 [2024-07-23 08:53:40.365680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.097 [2024-07-23 08:53:40.365726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fb880 with addr=10.0.0.2, port=4420 00:40:28.097 [2024-07-23 08:53:40.365756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb880 is same with the state(5) to be set 00:40:28.097 [2024-07-23 08:53:40.365792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:40:28.097 [2024-07-23 08:53:40.365831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7c80 (9): Bad file descriptor 00:40:28.097 [2024-07-23 08:53:40.365869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f8400 (9): Bad file descriptor 00:40:28.097 [2024-07-23 08:53:40.365955] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.097 [2024-07-23 08:53:40.366002] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
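The `posix_sock_create: connect() failed, errno = 111` entries above are plain ECONNREFUSED failures: the host keeps redialing the target at 10.0.0.2:4420 during the controller resets after the listener has gone away, so each reconnect attempt is refused and the qpair is torn down again ("Bad file descriptor" on the subsequent flush). A minimal sketch of the same failure mode with ordinary sockets (plain Python, not SPDK code; the address and port are taken from the log):

```python
import socket

# On Linux, errno 111 is ECONNREFUSED: the peer answers the SYN with a RST
# because nothing is listening on that port any more.

def try_connect(addr: str, port: int, timeout: float = 1.0) -> int:
    """Return 0 on success, otherwise the errno observed (111 = ECONNREFUSED)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((addr, port))
        return 0
    except OSError as exc:
        return exc.errno if exc.errno is not None else -1
    finally:
        s.close()

# With no NVMe/TCP listener left on the target, this mirrors the log entries.
print(try_connect("10.0.0.2", 4420))
```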
00:40:28.097 [2024-07-23 08:53:40.366039] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.097 [2024-07-23 08:53:40.366080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fb880 (9): Bad file descriptor 00:40:28.097 [2024-07-23 08:53:40.366126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f8b80 (9): Bad file descriptor 00:40:28.097 [2024-07-23 08:53:40.366637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.097 [2024-07-23 08:53:40.366686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f9a80 with addr=10.0.0.2, port=4420 00:40:28.097 [2024-07-23 08:53:40.366727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set 00:40:28.097 [2024-07-23 08:53:40.366946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.097 [2024-07-23 08:53:40.366992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f9300 with addr=10.0.0.2, port=4420 00:40:28.097 [2024-07-23 08:53:40.367023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:40:28.097 [2024-07-23 08:53:40.367257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.097 [2024-07-23 08:53:40.367303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fa200 with addr=10.0.0.2, port=4420 00:40:28.097 [2024-07-23 08:53:40.367346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(5) to be set 00:40:28.097 [2024-07-23 08:53:40.367613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.097 [2024-07-23 08:53:40.367657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fa980 with addr=10.0.0.2, port=4420 00:40:28.097 [2024-07-23 08:53:40.367686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(5) to be set 00:40:28.097 [2024-07-23 08:53:40.367721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:28.097 [2024-07-23 08:53:40.367748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:28.097 [2024-07-23 08:53:40.367779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:28.097 [2024-07-23 08:53:40.367819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:40:28.097 [2024-07-23 08:53:40.367846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:40:28.097 [2024-07-23 08:53:40.367870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:40:28.097 [2024-07-23 08:53:40.367905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:40:28.097 [2024-07-23 08:53:40.367932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:40:28.097 [2024-07-23 08:53:40.367956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:40:28.097 [2024-07-23 08:53:40.369334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.369961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.369989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.370021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.370049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.370081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.370109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.370140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.370168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.370200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.097 [2024-07-23 08:53:40.370228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.097 [2024-07-23 08:53:40.370259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.370956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.370986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:28.098 [2024-07-23 08:53:40.371707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.371949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.371976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 
08:53:40.372332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.098 [2024-07-23 08:53:40.372510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.098 [2024-07-23 08:53:40.372538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.372597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.372721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.372779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.372838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.372896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372927] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.372954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.372986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.373014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.373045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.373072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.373103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.373131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.373162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:28.099 [2024-07-23 08:53:40.373189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:28.099 [2024-07-23 08:53:40.373217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fea80 is same with the state(5) to be set 00:40:28.099 [2024-07-23 08:53:40.379396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.099 [2024-07-23 08:53:40.379439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.099 [2024-07-23 08:53:40.379464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
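Every aborted READ in the dumps above completes with status `(00/08)`: status code type 0x0 (Generic Command Status) and status code 0x08, which the NVMe base specification names "Command Aborted due to SQ Deletion". That is the expected completion when the host deletes the I/O submission queue while resetting the controller. A small decoder for the `(SCT/SC)` pair that `spdk_nvme_print_completion` logs (hypothetical helper, covering only the generic codes relevant here):

```python
# Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion, e.g. "(00/08)".
# Only a few generic (SCT 0x0) status codes are listed.
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct_sc: str) -> str:
    sct, sc = (int(part, 16) for part in sct_sc.strip("()").split("/"))
    if sct == 0x0:  # Generic Command Status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status("(00/08)"))  # -> Command Aborted due to SQ Deletion
```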
00:40:28.099 task offset: 17024 on job bdev=Nvme5n1 fails
00:40:28.099
00:40:28.099 Latency(us)
00:40:28.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:28.099 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme1n1 ended in about 1.24 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme1n1 : 1.24 103.12 6.44 51.56 0.00 408957.79 28932.93 407002.83
00:40:28.099 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme2n1 ended in about 1.25 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme2n1 : 1.25 102.61 6.41 51.31 0.00 402096.23 28544.57 407002.83
00:40:28.099 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme3n1 ended in about 1.25 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme3n1 : 1.25 106.10 6.63 51.05 0.00 385277.26 28544.57 386808.04
00:40:28.099 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme4n1 ended in about 1.26 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme4n1 : 1.26 101.62 6.35 50.81 0.00 388398.90 29903.83 410109.72
00:40:28.099 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme5n1 ended in about 1.23 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme5n1 : 1.23 104.30 6.52 52.15 0.00 368648.60 14078.10 444285.53
00:40:28.099 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme6n1 ended in about 1.23 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme6n1 : 1.23 104.17 6.51 52.08 0.00 360339.28 14854.83 419430.40
00:40:28.099 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme7n1 ended in about 1.28 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme7n1 : 1.28 99.91 6.24 49.96 0.00 368932.28 32816.55 410109.72
00:40:28.099 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme8n1 ended in about 1.29 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme8n1 : 1.29 99.44 6.22 49.72 0.00 362191.58 27962.03 416323.51
00:40:28.099 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme9n1 ended in about 1.30 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme9n1 : 1.30 98.60 6.16 49.30 0.00 357040.23 38253.61 410109.72
00:40:28.099 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:28.099 Job: Nvme10n1 ended in about 1.27 seconds with error
00:40:28.099 Verification LBA range: start 0x0 length 0x400
00:40:28.099 Nvme10n1 : 1.27 50.55 3.16 50.55 0.00 506440.06 32622.36 456713.10
00:40:28.099 ===================================================================================================================
00:40:28.099 Total : 970.42 60.65 508.49 0.00 386841.53 14078.10 456713.10
00:40:28.099 [2024-07-23 08:53:40.490428] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:40:28.099 [2024-07-23 08:53:40.490554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
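Each per-device row in the summary above carries eight numeric columns: runtime in seconds, IOPS, MiB/s, failed I/O per second, timed-out I/O per second, and average/min/max latency in microseconds (per the `Latency(us)` header). A throwaway parser for rows of that shape, in case the figures need post-processing (illustrative helper, not part of the test suite):

```python
import re
from typing import NamedTuple, Optional

class JobRow(NamedTuple):
    name: str
    runtime_s: float
    iops: float
    mib_s: float
    fail_per_s: float
    timeout_per_s: float
    avg_lat_us: float
    min_lat_us: float
    max_lat_us: float

# Matches e.g. "Nvme1n1 : 1.24 103.12 6.44 51.56 0.00 408957.79 28932.93 407002.83"
ROW_RE = re.compile(r"(\S+)\s*:\s*" + r"\s+".join([r"([\d.]+)"] * 8))

def parse_row(line: str) -> Optional[JobRow]:
    m = ROW_RE.search(line)
    if m is None:
        return None
    name, *cols = m.groups()
    return JobRow(name, *(float(c) for c in cols))

row = parse_row("Nvme1n1 : 1.24 103.12 6.44 51.56 0.00 408957.79 28932.93 407002.83")
print(row.iops, row.avg_lat_us)  # -> 103.12 408957.79
```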
00:40:28.099 [2024-07-23 08:53:40.490705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f9a80 (9): Bad file descriptor 00:40:28.099 [2024-07-23 08:53:40.490763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f9300 (9): Bad file descriptor 00:40:28.099 [2024-07-23 08:53:40.490803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fa200 (9): Bad file descriptor 00:40:28.099 [2024-07-23 08:53:40.490840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fa980 (9): Bad file descriptor 00:40:28.099 [2024-07-23 08:53:40.490875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:40:28.099 [2024-07-23 08:53:40.490902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:40:28.099 [2024-07-23 08:53:40.490943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:40:28.099 [2024-07-23 08:53:40.491000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:40:28.099 [2024-07-23 08:53:40.491027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:40:28.099 [2024-07-23 08:53:40.491052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:40:28.099 [2024-07-23 08:53:40.491196] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.099 [2024-07-23 08:53:40.491243] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.491280] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.491327] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.491365] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.491401] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.491672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.491711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:28.100 [2024-07-23 08:53:40.492151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.100 [2024-07-23 08:53:40.492207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fb100 with addr=10.0.0.2, port=4420 00:40:28.100 [2024-07-23 08:53:40.492243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(5) to be set 00:40:28.100 [2024-07-23 08:53:40.492273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.492297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.492342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:40:28.100 [2024-07-23 08:53:40.492383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.492410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.492435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:40:28.100 [2024-07-23 08:53:40.492470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.492496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.492520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:40:28.100 [2024-07-23 08:53:40.492554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.492580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.492603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:40:28.100 [2024-07-23 08:53:40.492667] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.492708] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.492743] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.492804] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.492842] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.492877] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:28.100 [2024-07-23 08:53:40.492911] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:40:28.100 [2024-07-23 08:53:40.493706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:40:28.100 [2024-07-23 08:53:40.493757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:40:28.100 [2024-07-23 08:53:40.493791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:28.100 [2024-07-23 08:53:40.493865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.493896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.493919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.493941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.494027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fb100 (9): Bad file descriptor 00:40:28.100 [2024-07-23 08:53:40.494437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.100 [2024-07-23 08:53:40.494488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f8400 with addr=10.0.0.2, port=4420 00:40:28.100 [2024-07-23 08:53:40.494520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8400 is same with the state(5) to be set 00:40:28.100 [2024-07-23 08:53:40.494751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.100 [2024-07-23 08:53:40.494797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7c80 with addr=10.0.0.2, port=4420 00:40:28.100 [2024-07-23 08:53:40.494828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7c80 is same with the state(5) to be set 00:40:28.100 [2024-07-23 08:53:40.495096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.100 [2024-07-23 08:53:40.495141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:40:28.100 [2024-07-23 08:53:40.495171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:40:28.100 [2024-07-23 08:53:40.495199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.495223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.495248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:40:28.100 [2024-07-23 08:53:40.495339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:40:28.100 [2024-07-23 08:53:40.495382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:40:28.100 [2024-07-23 08:53:40.495437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:28.100 [2024-07-23 08:53:40.495520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f8400 (9): Bad file descriptor 00:40:28.100 [2024-07-23 08:53:40.495568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7c80 (9): Bad file descriptor 00:40:28.100 [2024-07-23 08:53:40.495606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:40:28.100 [2024-07-23 08:53:40.495886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.100 [2024-07-23 08:53:40.495940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001fb880 with addr=10.0.0.2, port=4420 00:40:28.100 [2024-07-23 08:53:40.495971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb880 is same with the state(5) to be set 00:40:28.100 [2024-07-23 08:53:40.496136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:28.100 [2024-07-23 08:53:40.496181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f8b80 with addr=10.0.0.2, port=4420 00:40:28.100 [2024-07-23 08:53:40.496210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set 00:40:28.100 [2024-07-23 08:53:40.496238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.496262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.496287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:40:28.100 [2024-07-23 08:53:40.496332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.496361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.496386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:40:28.100 [2024-07-23 08:53:40.496420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.496446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.496470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:28.100 [2024-07-23 08:53:40.496571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.496606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.496630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:28.100 [2024-07-23 08:53:40.496662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001fb880 (9): Bad file descriptor 00:40:28.100 [2024-07-23 08:53:40.496701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f8b80 (9): Bad file descriptor 00:40:28.100 [2024-07-23 08:53:40.496789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.496839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.496865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:40:28.100 [2024-07-23 08:53:40.496901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:40:28.100 [2024-07-23 08:53:40.496928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:40:28.100 [2024-07-23 08:53:40.496952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:40:28.100 [2024-07-23 08:53:40.497040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:28.100 [2024-07-23 08:53:40.497074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:32.335 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:40:32.335 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2462154 00:40:32.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2462154) - No such process 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:32.904 08:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:32.904 rmmod nvme_tcp 00:40:32.904 rmmod nvme_fabrics 00:40:32.904 rmmod nvme_keyring 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:32.904 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:35.445 00:40:35.445 real 0m14.458s 00:40:35.445 user 0m45.000s 00:40:35.445 sys 0m2.697s 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:40:35.445 ************************************ 00:40:35.445 END TEST nvmf_shutdown_tc3 00:40:35.445 ************************************ 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:40:35.445 00:40:35.445 real 0m53.516s 00:40:35.445 user 2m55.332s 00:40:35.445 sys 0m11.320s 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:35.445 ************************************ 00:40:35.445 END TEST nvmf_shutdown 00:40:35.445 ************************************ 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:40:35.445 00:40:35.445 real 21m8.853s 00:40:35.445 user 57m0.119s 00:40:35.445 sys 4m13.899s 00:40:35.445 08:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:35.445 08:53:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:40:35.445 ************************************ 00:40:35.445 END TEST nvmf_target_extra 00:40:35.445 ************************************ 00:40:35.445 08:53:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:40:35.445 08:53:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:40:35.445 08:53:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:35.445 08:53:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:35.445 08:53:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:35.445 ************************************ 00:40:35.445 START TEST nvmf_host 00:40:35.445 ************************************ 00:40:35.445 08:53:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:40:35.445 * Looking for test storage... 00:40:35.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.446 ************************************ 00:40:35.446 START TEST nvmf_multicontroller 00:40:35.446 ************************************ 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:40:35.446 * Looking for test storage... 00:40:35.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:35.446 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:35.447 08:53:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:40:35.447 08:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:40:38.742 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@296 -- # local -ga e810 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:38.743 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:38.743 08:53:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:38.743 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:38.743 Found net devices under 0000:84:00.0: cvl_0_0 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:38.743 Found net devices under 0000:84:00.1: cvl_0_1 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:40:38.743 08:53:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:38.743 08:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:38.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:38.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:40:38.743 00:40:38.743 --- 10.0.0.2 ping statistics --- 00:40:38.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.743 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:38.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:38.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:40:38.743 00:40:38.743 --- 10.0.0.1 ping statistics --- 00:40:38.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.743 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:38.743 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2465235 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2465235 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2465235 ']' 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:38.744 08:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:39.003 [2024-07-23 08:53:51.420125] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:40:39.003 [2024-07-23 08:53:51.420455] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:39.263 EAL: No free 2048 kB hugepages reported on node 1 00:40:39.263 [2024-07-23 08:53:51.702663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:39.522 [2024-07-23 08:53:52.022098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:39.522 [2024-07-23 08:53:52.022180] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:39.522 [2024-07-23 08:53:52.022220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:39.522 [2024-07-23 08:53:52.022246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:39.522 [2024-07-23 08:53:52.022271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:39.522 [2024-07-23 08:53:52.022431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:39.522 [2024-07-23 08:53:52.022501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:39.522 [2024-07-23 08:53:52.022511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.461 [2024-07-23 08:53:52.930099] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.461 08:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.721 Malloc0 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.721 
08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.721 [2024-07-23 08:53:53.077038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.721 [2024-07-23 08:53:53.084911] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.721 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.722 Malloc1 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.722 08:53:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2465396 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2465396 /var/tmp/bdevperf.sock 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2465396 ']' 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:40.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:40.722 08:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.630 08:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:42.630 08:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:40:42.630 08:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:40:42.630 08:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.630 08:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.630 NVMe0n1 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:42.630 1 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.630 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.630 request: 00:40:42.630 { 00:40:42.630 "name": "NVMe0", 00:40:42.631 "trtype": "tcp", 00:40:42.631 "traddr": "10.0.0.2", 00:40:42.631 "adrfam": "ipv4", 00:40:42.631 
"trsvcid": "4420", 00:40:42.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.631 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:40:42.631 "hostaddr": "10.0.0.2", 00:40:42.631 "hostsvcid": "60000", 00:40:42.631 "prchk_reftag": false, 00:40:42.631 "prchk_guard": false, 00:40:42.631 "hdgst": false, 00:40:42.631 "ddgst": false, 00:40:42.631 "method": "bdev_nvme_attach_controller", 00:40:42.631 "req_id": 1 00:40:42.631 } 00:40:42.631 Got JSON-RPC error response 00:40:42.631 response: 00:40:42.631 { 00:40:42.631 "code": -114, 00:40:42.631 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:40:42.631 } 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.631 request: 00:40:42.631 { 00:40:42.631 "name": "NVMe0", 00:40:42.631 "trtype": "tcp", 00:40:42.631 "traddr": "10.0.0.2", 00:40:42.631 "adrfam": "ipv4", 00:40:42.631 "trsvcid": "4420", 00:40:42.631 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:42.631 "hostaddr": "10.0.0.2", 00:40:42.631 "hostsvcid": "60000", 00:40:42.631 "prchk_reftag": false, 00:40:42.631 "prchk_guard": false, 00:40:42.631 "hdgst": false, 00:40:42.631 "ddgst": false, 00:40:42.631 "method": "bdev_nvme_attach_controller", 00:40:42.631 "req_id": 1 00:40:42.631 } 00:40:42.631 Got JSON-RPC error response 00:40:42.631 response: 00:40:42.631 { 00:40:42.631 "code": -114, 00:40:42.631 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:40:42.631 } 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.631 request: 00:40:42.631 { 00:40:42.631 "name": "NVMe0", 00:40:42.631 "trtype": "tcp", 00:40:42.631 "traddr": "10.0.0.2", 00:40:42.631 "adrfam": "ipv4", 00:40:42.631 "trsvcid": "4420", 00:40:42.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.631 "hostaddr": "10.0.0.2", 00:40:42.631 "hostsvcid": "60000", 00:40:42.631 "prchk_reftag": false, 00:40:42.631 "prchk_guard": false, 00:40:42.631 "hdgst": false, 00:40:42.631 "ddgst": false, 00:40:42.631 "multipath": "disable", 00:40:42.631 "method": "bdev_nvme_attach_controller", 00:40:42.631 "req_id": 1 00:40:42.631 } 00:40:42.631 Got JSON-RPC error response 00:40:42.631 response: 00:40:42.631 { 00:40:42.631 "code": -114, 00:40:42.631 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:40:42.631 } 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.631 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.631 request: 00:40:42.631 { 00:40:42.631 "name": "NVMe0", 00:40:42.631 "trtype": "tcp", 00:40:42.631 "traddr": "10.0.0.2", 00:40:42.631 "adrfam": "ipv4", 00:40:42.631 "trsvcid": "4420", 00:40:42.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.631 "hostaddr": "10.0.0.2", 00:40:42.631 "hostsvcid": "60000", 00:40:42.631 "prchk_reftag": false, 00:40:42.631 "prchk_guard": false, 00:40:42.631 "hdgst": false, 00:40:42.631 "ddgst": false, 00:40:42.631 "multipath": "failover", 00:40:42.891 "method": "bdev_nvme_attach_controller", 00:40:42.891 "req_id": 1 00:40:42.891 } 00:40:42.891 Got JSON-RPC error response 00:40:42.891 response: 00:40:42.891 { 00:40:42.891 "code": -114, 00:40:42.891 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:40:42.891 } 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.891 00:40:42.891 08:53:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:42.891 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:43.151 00:40:43.151 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:43.151 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:43.152 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:43.152 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:43.152 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:40:43.152 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:43.152 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:40:43.152 08:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:44.092 0 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2465396 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2465396 ']' 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2465396 00:40:44.092 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:40:44.352 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:44.352 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2465396 00:40:44.352 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:40:44.352 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:44.352 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2465396' 00:40:44.352 killing process with pid 2465396 00:40:44.352 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2465396 00:40:44.352 08:53:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2465396 00:40:45.734 08:53:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:45.734 08:53:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:45.734 08:53:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:40:45.734 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:40:45.735 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:40:45.735 [2024-07-23 08:53:53.404423] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:40:45.735 [2024-07-23 08:53:53.404781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465396 ] 00:40:45.735 EAL: No free 2048 kB hugepages reported on node 1 00:40:45.735 [2024-07-23 08:53:53.648689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.735 [2024-07-23 08:53:53.961663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.735 [2024-07-23 08:53:55.416975] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 84851e12-3e3e-4adb-81a1-dd4f427f9f50 already exists 00:40:45.735 [2024-07-23 08:53:55.417057] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:84851e12-3e3e-4adb-81a1-dd4f427f9f50 alias for bdev NVMe1n1 00:40:45.735 [2024-07-23 08:53:55.417091] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:40:45.735 Running I/O for 1 seconds... 00:40:45.735 00:40:45.735 Latency(us) 00:40:45.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:45.735 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:40:45.735 NVMe0n1 : 1.01 10071.25 39.34 0.00 0.00 12684.94 7718.68 23981.32 00:40:45.735 =================================================================================================================== 00:40:45.735 Total : 10071.25 39.34 0.00 0.00 12684.94 7718.68 23981.32 00:40:45.735 Received shutdown signal, test time was about 1.000000 seconds 00:40:45.735 00:40:45.735 Latency(us) 00:40:45.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:45.735 =================================================================================================================== 00:40:45.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:45.735 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:45.735 rmmod nvme_tcp 00:40:45.735 rmmod nvme_fabrics 00:40:45.735 rmmod nvme_keyring 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2465235 ']' 00:40:45.735 08:53:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2465235 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2465235 ']' 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2465235 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2465235 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2465235' 00:40:45.735 killing process with pid 2465235 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2465235 00:40:45.735 08:53:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2465235 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:48.275 08:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.304 08:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:50.304 00:40:50.304 real 0m14.620s 00:40:50.304 user 0m30.591s 00:40:50.304 sys 0m4.290s 00:40:50.304 08:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:50.304 08:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:40:50.304 ************************************ 00:40:50.304 END TEST nvmf_multicontroller 00:40:50.304 ************************************ 00:40:50.304 08:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:40:50.304 08:54:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.305 ************************************ 00:40:50.305 START TEST nvmf_aer 00:40:50.305 ************************************ 00:40:50.305 08:54:02 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:40:50.305 * Looking for test storage... 00:40:50.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:40:50.305 08:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:53.636 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:53.636 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:53.636 Found net devices under 0000:84:00.0: cvl_0_0 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:53.636 08:54:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:53.636 Found net devices under 0000:84:00.1: cvl_0_1 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:53.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:40:53.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:40:53.636 00:40:53.636 --- 10.0.0.2 ping statistics --- 00:40:53.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:53.636 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:53.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:53.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:40:53.636 00:40:53.636 --- 10.0.0.1 ping statistics --- 00:40:53.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:53.636 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:53.636 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2468371 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2468371 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2468371 ']' 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:53.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:53.637 08:54:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:53.637 [2024-07-23 08:54:06.112262] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:40:53.637 [2024-07-23 08:54:06.112596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:53.896 EAL: No free 2048 kB hugepages reported on node 1 00:40:54.156 [2024-07-23 08:54:06.418732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:54.415 [2024-07-23 08:54:06.899647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:54.415 [2024-07-23 08:54:06.899767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:54.415 [2024-07-23 08:54:06.899830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:54.415 [2024-07-23 08:54:06.899878] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:54.415 [2024-07-23 08:54:06.899927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:54.415 [2024-07-23 08:54:06.900151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:54.415 [2024-07-23 08:54:06.900215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:54.415 [2024-07-23 08:54:06.900266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.415 [2024-07-23 08:54:06.900278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:54.985 [2024-07-23 08:54:07.419643] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:54.985 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:55.246 Malloc0 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:55.246 08:54:07 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:55.246 [2024-07-23 08:54:07.551842] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:55.246 [ 00:40:55.246 { 00:40:55.246 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:55.246 "subtype": "Discovery", 00:40:55.246 "listen_addresses": [], 00:40:55.246 "allow_any_host": true, 00:40:55.246 "hosts": [] 00:40:55.246 }, 00:40:55.246 { 00:40:55.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:55.246 "subtype": "NVMe", 00:40:55.246 "listen_addresses": [ 00:40:55.246 { 00:40:55.246 "trtype": "TCP", 00:40:55.246 "adrfam": "IPv4", 00:40:55.246 "traddr": "10.0.0.2", 00:40:55.246 "trsvcid": "4420" 00:40:55.246 } 00:40:55.246 ], 00:40:55.246 "allow_any_host": true, 00:40:55.246 "hosts": [], 00:40:55.246 "serial_number": "SPDK00000000000001", 00:40:55.246 "model_number": "SPDK bdev Controller", 00:40:55.246 "max_namespaces": 2, 00:40:55.246 "min_cntlid": 1, 00:40:55.246 "max_cntlid": 65519, 00:40:55.246 "namespaces": [ 00:40:55.246 { 00:40:55.246 "nsid": 1, 00:40:55.246 "bdev_name": "Malloc0", 00:40:55.246 "name": "Malloc0", 00:40:55.246 "nguid": "ADFF81F99AF84C7B90139FE20B94B65B", 00:40:55.246 "uuid": "adff81f9-9af8-4c7b-9013-9fe20b94b65b" 00:40:55.246 } 00:40:55.246 ] 00:40:55.246 } 00:40:55.246 ] 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2468534 00:40:55.246 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:40:55.247 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:40:55.507 EAL: No free 2048 kB hugepages reported on node 1 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 4 -lt 200 ']' 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=5 00:40:55.507 08:54:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:40:55.767 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:40:55.767 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:40:55.767 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:40:55.767 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:40:55.767 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:55.767 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:56.026 Malloc1 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:56.026 [ 00:40:56.026 { 00:40:56.026 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:56.026 "subtype": "Discovery", 00:40:56.026 "listen_addresses": [], 00:40:56.026 "allow_any_host": true, 00:40:56.026 "hosts": [] 00:40:56.026 }, 00:40:56.026 { 00:40:56.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:56.026 "subtype": "NVMe", 00:40:56.026 "listen_addresses": [ 00:40:56.026 { 00:40:56.026 "trtype": "TCP", 00:40:56.026 "adrfam": "IPv4", 00:40:56.026 "traddr": "10.0.0.2", 00:40:56.026 "trsvcid": "4420" 00:40:56.026 } 00:40:56.026 ], 00:40:56.026 "allow_any_host": true, 00:40:56.026 "hosts": [], 00:40:56.026 "serial_number": "SPDK00000000000001", 00:40:56.026 "model_number": "SPDK bdev Controller", 00:40:56.026 "max_namespaces": 2, 00:40:56.026 "min_cntlid": 1, 00:40:56.026 "max_cntlid": 65519, 00:40:56.026 "namespaces": [ 00:40:56.026 { 00:40:56.026 "nsid": 1, 00:40:56.026 "bdev_name": "Malloc0", 00:40:56.026 "name": "Malloc0", 00:40:56.026 "nguid": "ADFF81F99AF84C7B90139FE20B94B65B", 00:40:56.026 "uuid": "adff81f9-9af8-4c7b-9013-9fe20b94b65b" 00:40:56.026 }, 00:40:56.026 { 00:40:56.026 "nsid": 2, 00:40:56.026 "bdev_name": "Malloc1", 00:40:56.026 "name": "Malloc1", 00:40:56.026 "nguid": "00B24C86353D406EA4F8339933259308", 00:40:56.026 "uuid": "00b24c86-353d-406e-a4f8-339933259308" 00:40:56.026 } 00:40:56.026 ] 00:40:56.026 } 00:40:56.026 ] 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:56.026 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2468534 00:40:56.026 Asynchronous Event Request test 00:40:56.026 Attaching to 10.0.0.2 00:40:56.026 Attached to 10.0.0.2 00:40:56.026 Registering asynchronous event callbacks... 00:40:56.026 Starting namespace attribute notice tests for all controllers... 00:40:56.027 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:40:56.027 aer_cb - Changed Namespace 00:40:56.027 Cleaning up... 
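The AER exchange summarized above (attach to cnode1, add a second namespace, receive the namespace-attribute-changed notice) can be reproduced by hand with the RPCs visible in this trace. A rough sketch, assuming an nvmf_tgt instance is already running and reachable on its default RPC socket and that the SPDK repo root is the working directory:
$ ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$ ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
$ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$ ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$ # start the AER listener; /tmp/aer_touch_file is the file the script's waitforfile loop polls
$ ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
$ # adding a second namespace is what triggers the aer_cb "Changed Namespace" notice above
$ ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
$ ./scripts/rpc.py nvmf_get_subsystems
In the CI run the target itself executes inside the cvl_0_0_ns_spdk network namespace set up earlier, so the listener address 10.0.0.2 belongs to the namespaced cvl_0_0 interface while the RPCs are issued from the host side.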
00:40:56.027 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:40:56.027 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:56.027 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:56.284 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:56.284 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:40:56.284 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:56.284 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:56.543 rmmod nvme_tcp 00:40:56.543 rmmod nvme_fabrics 00:40:56.543 rmmod nvme_keyring 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2468371 ']' 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2468371 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2468371 ']' 00:40:56.543 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2468371 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2468371 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2468371' 00:40:56.544 killing process with pid 2468371 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # 
kill 2468371 00:40:56.544 08:54:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2468371 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.452 08:54:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:01.001 00:41:01.001 real 0m10.550s 00:41:01.001 user 0m14.484s 00:41:01.001 sys 0m3.708s 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:41:01.001 ************************************ 00:41:01.001 END TEST nvmf_aer 00:41:01.001 ************************************ 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:01.001 08:54:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.001 ************************************ 00:41:01.001 START TEST nvmf_async_init 00:41:01.001 ************************************ 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:41:01.001 * Looking for test storage... 
00:41:01.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:41:01.001 08:54:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ac3636f9d2b6431593ef66eec04b7a5a 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.001 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:01.002 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:01.002 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:41:01.002 08:54:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:04.303 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:04.303 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
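The gather_supported_nvmf_pci_devs trace above is nvmf/common.sh deciding which NICs the TCP test may use: it collects supported Intel E810/X722 and Mellanox vendor:device IDs from a PCI bus cache, then looks under each matching device's sysfs node for the bound network interfaces (here cvl_0_0 and cvl_0_1 under 0000:84:00.0 and 0000:84:00.1). A simplified sketch of that lookup, with the PCI addresses hard-coded for illustration rather than discovered from the bus cache:

    # Sketch only: the real script builds pci_devs from pci_bus_cache lookups.
    pci_devs=(0000:84:00.0 0000:84:00.1)    # the two E810 ports (0x8086:0x159b) seen in the log
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the bound interfaces
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the names, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done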
00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.303 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:04.304 Found net devices under 0000:84:00.0: cvl_0_0 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:04.304 Found net devices under 0000:84:00.1: cvl_0_1 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:04.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:04.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:41:04.304 00:41:04.304 --- 10.0.0.2 ping statistics --- 00:41:04.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.304 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:04.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:04.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:41:04.304 00:41:04.304 --- 10.0.0.1 ping statistics --- 00:41:04.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.304 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2471385 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2471385 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2471385 ']' 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:04.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:04.304 08:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:04.304 [2024-07-23 08:54:16.502061] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:41:04.304 [2024-07-23 08:54:16.502229] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:04.304 EAL: No free 2048 kB hugepages reported on node 1 00:41:04.304 [2024-07-23 08:54:16.675055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.874 [2024-07-23 08:54:17.159573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:04.874 [2024-07-23 08:54:17.159707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:04.874 [2024-07-23 08:54:17.159767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:04.874 [2024-07-23 08:54:17.159821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:04.874 [2024-07-23 08:54:17.159867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:04.874 [2024-07-23 08:54:17.159977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.445 [2024-07-23 08:54:17.902556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.445 null0 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:41:05.445 08:54:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ac3636f9d2b6431593ef66eec04b7a5a 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.445 [2024-07-23 08:54:17.950800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.445 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:41:05.446 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.446 08:54:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.706 nvme0n1 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.706 [ 00:41:05.706 { 00:41:05.706 "name": "nvme0n1", 00:41:05.706 "aliases": [ 00:41:05.706 "ac3636f9-d2b6-4315-93ef-66eec04b7a5a" 00:41:05.706 ], 00:41:05.706 "product_name": "NVMe disk", 00:41:05.706 "block_size": 512, 00:41:05.706 "num_blocks": 2097152, 00:41:05.706 "uuid": "ac3636f9-d2b6-4315-93ef-66eec04b7a5a", 00:41:05.706 "assigned_rate_limits": { 00:41:05.706 "rw_ios_per_sec": 0, 00:41:05.706 "rw_mbytes_per_sec": 0, 00:41:05.706 "r_mbytes_per_sec": 0, 00:41:05.706 "w_mbytes_per_sec": 0 00:41:05.706 }, 00:41:05.706 "claimed": false, 00:41:05.706 "zoned": false, 00:41:05.706 "supported_io_types": { 00:41:05.706 "read": true, 00:41:05.706 "write": true, 00:41:05.706 "unmap": false, 00:41:05.706 "flush": true, 00:41:05.706 "reset": true, 00:41:05.706 "nvme_admin": true, 00:41:05.706 "nvme_io": true, 00:41:05.706 "nvme_io_md": false, 00:41:05.706 "write_zeroes": true, 00:41:05.706 "zcopy": false, 00:41:05.706 "get_zone_info": false, 00:41:05.706 "zone_management": false, 00:41:05.706 "zone_append": false, 00:41:05.706 "compare": true, 00:41:05.706 "compare_and_write": true, 00:41:05.706 "abort": true, 00:41:05.706 "seek_hole": false, 00:41:05.706 "seek_data": false, 00:41:05.706 "copy": true, 00:41:05.706 "nvme_iov_md": 
false 00:41:05.706 }, 00:41:05.706 "memory_domains": [ 00:41:05.706 { 00:41:05.706 "dma_device_id": "system", 00:41:05.706 "dma_device_type": 1 00:41:05.706 } 00:41:05.706 ], 00:41:05.706 "driver_specific": { 00:41:05.706 "nvme": [ 00:41:05.706 { 00:41:05.706 "trid": { 00:41:05.706 "trtype": "TCP", 00:41:05.706 "adrfam": "IPv4", 00:41:05.706 "traddr": "10.0.0.2", 00:41:05.706 "trsvcid": "4420", 00:41:05.706 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:05.706 }, 00:41:05.706 "ctrlr_data": { 00:41:05.706 "cntlid": 1, 00:41:05.706 "vendor_id": "0x8086", 00:41:05.706 "model_number": "SPDK bdev Controller", 00:41:05.706 "serial_number": "00000000000000000000", 00:41:05.706 "firmware_revision": "24.09", 00:41:05.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.706 "oacs": { 00:41:05.706 "security": 0, 00:41:05.706 "format": 0, 00:41:05.706 "firmware": 0, 00:41:05.706 "ns_manage": 0 00:41:05.706 }, 00:41:05.706 "multi_ctrlr": true, 00:41:05.706 "ana_reporting": false 00:41:05.706 }, 00:41:05.706 "vs": { 00:41:05.706 "nvme_version": "1.3" 00:41:05.706 }, 00:41:05.706 "ns_data": { 00:41:05.706 "id": 1, 00:41:05.706 "can_share": true 00:41:05.706 } 00:41:05.706 } 00:41:05.706 ], 00:41:05.706 "mp_policy": "active_passive" 00:41:05.706 } 00:41:05.706 } 00:41:05.706 ] 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.706 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.706 [2024-07-23 08:54:18.225968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:05.706 [2024-07-23 08:54:18.226238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:41:05.966 [2024-07-23 08:54:18.359853] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
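The async_init sequence traced above sets up the target side (TCP transport, a 1024 MB null bdev with 512-byte blocks, subsystem cnode0, a namespace keyed by the generated nguid, and a listener on 10.0.0.2:4420), attaches an NVMe bdev controller back over TCP, inspects the resulting nvme0n1 bdev, and then resets the controller, after which the reconnect shows cntlid 2. A condensed sketch of that RPC flow; the rpc.py path is an assumption, and in the traced test a single SPDK app appears to serve as both target and NVMe-oF host across the network namespaces:

    # Sketch only: RPC names and parameters mirror host/async_init.sh as traced above.
    rpc_py=scripts/rpc.py
    nguid=ac3636f9d2b6431593ef66eec04b7a5a   # uuidgen output with '-' stripped, from the log

    $rpc_py nvmf_create_transport -t tcp -o
    $rpc_py bdev_null_create null0 1024 512                      # 1024 MB null bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    $rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode0
    $rpc_py bdev_get_bdevs -b nvme0n1        # uuid matches the nguid above, cntlid 1
    $rpc_py bdev_nvme_reset_controller nvme0 # reconnects; the next bdev_get_bdevs shows cntlid 2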
00:41:05.966 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.966 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:41:05.966 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.966 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.966 [ 00:41:05.966 { 00:41:05.966 "name": "nvme0n1", 00:41:05.966 "aliases": [ 00:41:05.966 "ac3636f9-d2b6-4315-93ef-66eec04b7a5a" 00:41:05.966 ], 00:41:05.966 "product_name": "NVMe disk", 00:41:05.966 "block_size": 512, 00:41:05.966 "num_blocks": 2097152, 00:41:05.966 "uuid": "ac3636f9-d2b6-4315-93ef-66eec04b7a5a", 00:41:05.966 "assigned_rate_limits": { 00:41:05.966 "rw_ios_per_sec": 0, 00:41:05.966 "rw_mbytes_per_sec": 0, 00:41:05.966 "r_mbytes_per_sec": 0, 00:41:05.966 "w_mbytes_per_sec": 0 00:41:05.966 }, 00:41:05.966 "claimed": false, 00:41:05.966 "zoned": false, 00:41:05.966 "supported_io_types": { 00:41:05.966 "read": true, 00:41:05.966 "write": true, 00:41:05.966 "unmap": false, 00:41:05.967 "flush": true, 00:41:05.967 "reset": true, 00:41:05.967 "nvme_admin": true, 00:41:05.967 "nvme_io": true, 00:41:05.967 "nvme_io_md": false, 00:41:05.967 "write_zeroes": true, 00:41:05.967 "zcopy": false, 00:41:05.967 "get_zone_info": false, 00:41:05.967 "zone_management": false, 00:41:05.967 "zone_append": false, 00:41:05.967 "compare": true, 00:41:05.967 "compare_and_write": true, 00:41:05.967 "abort": true, 00:41:05.967 "seek_hole": false, 00:41:05.967 "seek_data": false, 00:41:05.967 "copy": true, 00:41:05.967 "nvme_iov_md": false 00:41:05.967 }, 00:41:05.967 "memory_domains": [ 00:41:05.967 { 00:41:05.967 "dma_device_id": "system", 00:41:05.967 "dma_device_type": 1 00:41:05.967 } 00:41:05.967 ], 00:41:05.967 "driver_specific": { 00:41:05.967 "nvme": [ 00:41:05.967 { 00:41:05.967 "trid": { 00:41:05.967 "trtype": "TCP", 00:41:05.967 "adrfam": "IPv4", 00:41:05.967 "traddr": "10.0.0.2", 00:41:05.967 "trsvcid": "4420", 00:41:05.967 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:05.967 }, 00:41:05.967 "ctrlr_data": { 00:41:05.967 "cntlid": 2, 00:41:05.967 "vendor_id": "0x8086", 00:41:05.967 "model_number": "SPDK bdev Controller", 00:41:05.967 "serial_number": "00000000000000000000", 00:41:05.967 "firmware_revision": "24.09", 00:41:05.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.967 "oacs": { 00:41:05.967 "security": 0, 00:41:05.967 "format": 0, 00:41:05.967 "firmware": 0, 00:41:05.967 "ns_manage": 0 00:41:05.967 }, 00:41:05.967 "multi_ctrlr": true, 00:41:05.967 "ana_reporting": false 00:41:05.967 }, 00:41:05.967 "vs": { 00:41:05.967 "nvme_version": "1.3" 00:41:05.967 }, 00:41:05.967 "ns_data": { 00:41:05.967 "id": 1, 00:41:05.967 "can_share": true 00:41:05.967 } 00:41:05.967 } 00:41:05.967 ], 00:41:05.967 "mp_policy": "active_passive" 00:41:05.967 } 00:41:05.967 } 00:41:05.967 ] 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.967 08:54:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Ny0FbrUa52 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Ny0FbrUa52 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.967 [2024-07-23 08:54:18.427487] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:05.967 [2024-07-23 08:54:18.427830] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ny0FbrUa52 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.967 [2024-07-23 08:54:18.435494] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ny0FbrUa52 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.967 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:05.967 [2024-07-23 08:54:18.449288] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:05.967 [2024-07-23 08:54:18.449487] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:41:06.227 nvme0n1 00:41:06.227 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.227 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:41:06.227 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
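The final leg of the async_init test exercises TLS: a PSK is written to a mktemp file, host access is restricted, a second listener on port 4421 is created with --secure-channel, the host NQN is allowed with that PSK, and the controller is re-attached over the secured listener (the target logs the expected "PSK path" deprecation warnings). A hedged sketch of that flow; the key material is the sample PSK from the log, the redirect into the key file is implied by the trace rather than shown, and the flag spelling belongs to this SPDK revision (v24.09-pre):

    # Sketch only: PSK-by-path is already marked deprecated in the traced build.
    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"

    rpc_py=scripts/rpc.py                    # assumed RPC client path
    $rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
            -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
            nqn.2016-06.io.spdk:host1 --psk "$key_path"
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
            -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"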
00:41:06.227 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:06.227 [ 00:41:06.227 { 00:41:06.227 "name": "nvme0n1", 00:41:06.227 "aliases": [ 00:41:06.227 "ac3636f9-d2b6-4315-93ef-66eec04b7a5a" 00:41:06.227 ], 00:41:06.227 "product_name": "NVMe disk", 00:41:06.227 "block_size": 512, 00:41:06.227 "num_blocks": 2097152, 00:41:06.227 "uuid": "ac3636f9-d2b6-4315-93ef-66eec04b7a5a", 00:41:06.227 "assigned_rate_limits": { 00:41:06.227 "rw_ios_per_sec": 0, 00:41:06.227 "rw_mbytes_per_sec": 0, 00:41:06.227 "r_mbytes_per_sec": 0, 00:41:06.227 "w_mbytes_per_sec": 0 00:41:06.227 }, 00:41:06.227 "claimed": false, 00:41:06.227 "zoned": false, 00:41:06.227 "supported_io_types": { 00:41:06.227 "read": true, 00:41:06.227 "write": true, 00:41:06.227 "unmap": false, 00:41:06.227 "flush": true, 00:41:06.227 "reset": true, 00:41:06.227 "nvme_admin": true, 00:41:06.227 "nvme_io": true, 00:41:06.227 "nvme_io_md": false, 00:41:06.227 "write_zeroes": true, 00:41:06.227 "zcopy": false, 00:41:06.227 "get_zone_info": false, 00:41:06.227 "zone_management": false, 00:41:06.227 "zone_append": false, 00:41:06.228 "compare": true, 00:41:06.228 "compare_and_write": true, 00:41:06.228 "abort": true, 00:41:06.228 "seek_hole": false, 00:41:06.228 "seek_data": false, 00:41:06.228 "copy": true, 00:41:06.228 "nvme_iov_md": false 00:41:06.228 }, 00:41:06.228 "memory_domains": [ 00:41:06.228 { 00:41:06.228 "dma_device_id": "system", 00:41:06.228 "dma_device_type": 1 00:41:06.228 } 00:41:06.228 ], 00:41:06.228 "driver_specific": { 00:41:06.228 "nvme": [ 00:41:06.228 { 00:41:06.228 "trid": { 00:41:06.228 "trtype": "TCP", 00:41:06.228 "adrfam": "IPv4", 00:41:06.228 "traddr": "10.0.0.2", 00:41:06.228 "trsvcid": "4421", 00:41:06.228 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:06.228 }, 00:41:06.228 "ctrlr_data": { 00:41:06.228 "cntlid": 3, 00:41:06.228 "vendor_id": "0x8086", 00:41:06.228 "model_number": "SPDK bdev Controller", 00:41:06.228 "serial_number": "00000000000000000000", 00:41:06.228 "firmware_revision": "24.09", 00:41:06.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:06.228 "oacs": { 00:41:06.228 "security": 0, 00:41:06.228 "format": 0, 00:41:06.228 "firmware": 0, 00:41:06.228 "ns_manage": 0 00:41:06.228 }, 00:41:06.228 "multi_ctrlr": true, 00:41:06.228 "ana_reporting": false 00:41:06.228 }, 00:41:06.228 "vs": { 00:41:06.228 "nvme_version": "1.3" 00:41:06.228 }, 00:41:06.228 "ns_data": { 00:41:06.228 "id": 1, 00:41:06.228 "can_share": true 00:41:06.228 } 00:41:06.228 } 00:41:06.228 ], 00:41:06.228 "mp_policy": "active_passive" 00:41:06.228 } 00:41:06.228 } 00:41:06.228 ] 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Ny0FbrUa52 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:41:06.228 08:54:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:06.228 rmmod nvme_tcp 00:41:06.228 rmmod nvme_fabrics 00:41:06.228 rmmod nvme_keyring 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2471385 ']' 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2471385 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2471385 ']' 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2471385 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2471385 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2471385' 00:41:06.228 killing process with pid 2471385 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2471385 00:41:06.228 [2024-07-23 08:54:18.727893] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:41:06.228 [2024-07-23 08:54:18.728024] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:41:06.228 08:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2471385 00:41:08.770 08:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:08.770 08:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:08.770 08:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:08.770 08:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:08.770 08:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:08.770 08:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.770 08:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:08.770 08:54:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:10.708 00:41:10.708 real 0m9.793s 00:41:10.708 user 0m5.624s 00:41:10.708 sys 0m3.231s 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:41:10.708 ************************************ 00:41:10.708 END TEST nvmf_async_init 00:41:10.708 ************************************ 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.708 ************************************ 00:41:10.708 START TEST dma 00:41:10.708 ************************************ 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:41:10.708 * Looking for test storage... 00:41:10.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.708 08:54:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:41:10.709 08:54:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:41:10.709 00:41:10.709 real 0m0.120s 00:41:10.709 user 0m0.053s 00:41:10.709 sys 0m0.076s 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:41:10.709 ************************************ 00:41:10.709 END TEST dma 00:41:10.709 ************************************ 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:10.709 ************************************ 00:41:10.709 START TEST nvmf_identify 00:41:10.709 ************************************ 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:41:10.709 * Looking for test storage... 
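[Editor's note] The dma test above finishes in roughly 0.1 s of real time because test/nvmf/host/dma.sh only exercises DMA offload on RDMA transports; with --transport=tcp the guard at host/dma.sh@12-13 fires and the script exits immediately, which run_test still records as a pass. Reconstructed as a sketch from the xtrace (the trace only shows the expanded value "tcp", so the variable name below is an assumption, not the verbatim script):

    # Sketch of the guard visible at host/dma.sh@12-13:
    # '[' tcp '!=' rdma ']' succeeds for --transport=tcp, so the test exits 0 early.
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0
    fi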
00:41:10.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:10.709 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.969 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.970 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.970 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:10.970 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:10.970 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:41:10.970 08:54:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:13.511 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:13.511 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:13.511 08:54:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:13.511 Found net devices under 0000:84:00.0: cvl_0_0 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:13.511 Found net devices under 0000:84:00.1: cvl_0_1 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:13.511 08:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:13.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:13.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:41:13.772 00:41:13.772 --- 10.0.0.2 ping statistics --- 00:41:13.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:13.772 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:13.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:13.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:41:13.772 00:41:13.772 --- 10.0.0.1 ping statistics --- 00:41:13.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:13.772 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2473914 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2473914 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2473914 ']' 00:41:13.772 08:54:26 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:13.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:13.772 08:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:14.033 [2024-07-23 08:54:26.363633] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:14.033 [2024-07-23 08:54:26.363957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:14.293 EAL: No free 2048 kB hugepages reported on node 1 00:41:14.293 [2024-07-23 08:54:26.693051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:14.862 [2024-07-23 08:54:27.213241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:14.862 [2024-07-23 08:54:27.213388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:14.862 [2024-07-23 08:54:27.213423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:14.862 [2024-07-23 08:54:27.213449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:14.862 [2024-07-23 08:54:27.213476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
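[Editor's note] For readability, the interface plumbing that nvmf_tcp_init performed above (xtrace lines nvmf/common.sh@229-268) condenses to the command sequence below. This is a summary assembled from the trace, not a new procedure; the interface, namespace, and address values are exactly the ones printed in this run (physical ice ports cvl_0_0/cvl_0_1, target port isolated in its own network namespace):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root ns -> target (0.227 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator (0.177 ms above)

The nvmf_tgt application is then launched inside cvl_0_0_ns_spdk, which is why its RPC and listener setup below all run under "ip netns exec cvl_0_0_ns_spdk".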
00:41:14.862 [2024-07-23 08:54:27.213603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.862 [2024-07-23 08:54:27.213661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:14.862 [2024-07-23 08:54:27.213708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.862 [2024-07-23 08:54:27.213722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 [2024-07-23 08:54:27.706410] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 Malloc0 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 [2024-07-23 08:54:27.871299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:15.432 [ 00:41:15.432 { 00:41:15.432 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:15.432 "subtype": "Discovery", 00:41:15.432 "listen_addresses": [ 00:41:15.432 { 00:41:15.432 "trtype": "TCP", 00:41:15.432 "adrfam": "IPv4", 00:41:15.432 "traddr": "10.0.0.2", 00:41:15.432 "trsvcid": "4420" 00:41:15.432 } 00:41:15.432 ], 00:41:15.432 "allow_any_host": true, 00:41:15.432 "hosts": [] 00:41:15.432 }, 00:41:15.432 { 00:41:15.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:15.432 "subtype": "NVMe", 00:41:15.432 "listen_addresses": [ 00:41:15.432 { 00:41:15.432 "trtype": "TCP", 00:41:15.432 "adrfam": "IPv4", 00:41:15.432 "traddr": "10.0.0.2", 00:41:15.432 "trsvcid": "4420" 00:41:15.432 } 00:41:15.432 ], 00:41:15.432 "allow_any_host": true, 00:41:15.432 "hosts": [], 00:41:15.432 "serial_number": "SPDK00000000000001", 00:41:15.432 "model_number": "SPDK bdev Controller", 00:41:15.432 "max_namespaces": 32, 00:41:15.432 "min_cntlid": 1, 00:41:15.432 "max_cntlid": 65519, 00:41:15.432 "namespaces": [ 00:41:15.432 { 00:41:15.432 "nsid": 1, 00:41:15.432 "bdev_name": "Malloc0", 00:41:15.432 "name": "Malloc0", 00:41:15.432 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:41:15.432 "eui64": "ABCDEF0123456789", 00:41:15.432 "uuid": "e55b5af7-1b7c-4dfd-b202-3da18c6c86a5" 00:41:15.432 } 00:41:15.432 ] 00:41:15.432 } 00:41:15.432 ] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:15.432 08:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:41:15.694 [2024-07-23 08:54:27.957722] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
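[Editor's note] The discovery subsystem and nqn.2016-06.io.spdk:cnode1, which the spdk_nvme_identify run starting above will enumerate, were provisioned by the rpc_cmd calls earlier in host/identify.sh. Collected in one place as a sketch (rpc_cmd is the test harness's wrapper around SPDK's JSON-RPC client; arguments are exactly those shown in the xtrace, with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from identify.sh@11-12):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                # TCP transport, 8 KiB in-capsule data
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                   # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_get_subsystems                                    # dumps the JSON shown above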
00:41:15.694 [2024-07-23 08:54:27.957962] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474197 ] 00:41:15.694 EAL: No free 2048 kB hugepages reported on node 1 00:41:15.694 [2024-07-23 08:54:28.071075] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:41:15.694 [2024-07-23 08:54:28.071241] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:41:15.694 [2024-07-23 08:54:28.071271] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:41:15.694 [2024-07-23 08:54:28.075325] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:41:15.694 [2024-07-23 08:54:28.075371] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:41:15.694 [2024-07-23 08:54:28.075756] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:41:15.694 [2024-07-23 08:54:28.075856] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:41:15.694 [2024-07-23 08:54:28.090347] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:41:15.694 [2024-07-23 08:54:28.090391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:41:15.694 [2024-07-23 08:54:28.090413] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:41:15.694 [2024-07-23 08:54:28.090427] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:41:15.694 [2024-07-23 08:54:28.090535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.694 [2024-07-23 08:54:28.090565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.694 [2024-07-23 08:54:28.090590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.694 [2024-07-23 08:54:28.090636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:41:15.694 [2024-07-23 08:54:28.090691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.694 [2024-07-23 08:54:28.098342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.694 [2024-07-23 08:54:28.098387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.694 [2024-07-23 08:54:28.098405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.694 [2024-07-23 08:54:28.098425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.694 [2024-07-23 08:54:28.098471] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:41:15.694 [2024-07-23 08:54:28.098502] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:41:15.694 [2024-07-23 08:54:28.098524] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:41:15.694 [2024-07-23 08:54:28.098564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.694 [2024-07-23 08:54:28.098589] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.098617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.695 [2024-07-23 08:54:28.098652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.695 [2024-07-23 08:54:28.098702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.695 [2024-07-23 08:54:28.098946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.695 [2024-07-23 08:54:28.098984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.695 [2024-07-23 08:54:28.099003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.099019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.695 [2024-07-23 08:54:28.099042] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:41:15.695 [2024-07-23 08:54:28.099078] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:41:15.695 [2024-07-23 08:54:28.099106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.099124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.099139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.695 [2024-07-23 08:54:28.099171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.695 [2024-07-23 08:54:28.099225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.695 [2024-07-23 08:54:28.099450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.695 [2024-07-23 08:54:28.099480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.695 [2024-07-23 08:54:28.099496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.099510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.695 [2024-07-23 08:54:28.099531] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:41:15.695 [2024-07-23 08:54:28.099570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:41:15.695 [2024-07-23 08:54:28.099598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.099616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.099638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.695 [2024-07-23 08:54:28.099669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.695 [2024-07-23 08:54:28.099714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.695 [2024-07-23 08:54:28.099933] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.695 [2024-07-23 08:54:28.099961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.695 [2024-07-23 08:54:28.099976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.099990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.695 [2024-07-23 08:54:28.100011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:41:15.695 [2024-07-23 08:54:28.100047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.100076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.100093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.695 [2024-07-23 08:54:28.100119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.695 [2024-07-23 08:54:28.100168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.695 [2024-07-23 08:54:28.100389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.695 [2024-07-23 08:54:28.100419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.695 [2024-07-23 08:54:28.100434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.100449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.695 [2024-07-23 08:54:28.100469] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:41:15.695 [2024-07-23 08:54:28.100495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:41:15.695 [2024-07-23 08:54:28.100525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:41:15.695 [2024-07-23 08:54:28.100647] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:41:15.695 [2024-07-23 08:54:28.100665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:41:15.695 [2024-07-23 08:54:28.100696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.100714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.100730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.695 [2024-07-23 08:54:28.100761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.695 [2024-07-23 08:54:28.100808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.695 [2024-07-23 08:54:28.101041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.695 [2024-07-23 08:54:28.101068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:41:15.695 [2024-07-23 08:54:28.101083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.101098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.695 [2024-07-23 08:54:28.101118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:41:15.695 [2024-07-23 08:54:28.101162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.101182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.101198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.695 [2024-07-23 08:54:28.101223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.695 [2024-07-23 08:54:28.101265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.695 [2024-07-23 08:54:28.101479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.695 [2024-07-23 08:54:28.101509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.695 [2024-07-23 08:54:28.101524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.101545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.695 [2024-07-23 08:54:28.101565] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:41:15.695 [2024-07-23 08:54:28.101599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:41:15.695 [2024-07-23 08:54:28.101630] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:41:15.695 [2024-07-23 08:54:28.101669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:41:15.695 [2024-07-23 08:54:28.101707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.101733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.695 [2024-07-23 08:54:28.101760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.695 [2024-07-23 08:54:28.101803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.695 [2024-07-23 08:54:28.102097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:15.695 [2024-07-23 08:54:28.102126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:15.695 [2024-07-23 08:54:28.102149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.102167] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:41:15.695 [2024-07-23 08:54:28.102185] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:41:15.695 [2024-07-23 08:54:28.102204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.102230] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.102250] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.102277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.695 [2024-07-23 08:54:28.102299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.695 [2024-07-23 08:54:28.106334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.106357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.695 [2024-07-23 08:54:28.106399] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:41:15.695 [2024-07-23 08:54:28.106423] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:41:15.695 [2024-07-23 08:54:28.106451] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:41:15.695 [2024-07-23 08:54:28.106472] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:41:15.695 [2024-07-23 08:54:28.106489] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:41:15.695 [2024-07-23 08:54:28.106507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:41:15.695 [2024-07-23 08:54:28.106540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:41:15.695 [2024-07-23 08:54:28.106567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.695 [2024-07-23 08:54:28.106585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.106600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.106628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:15.696 [2024-07-23 08:54:28.106682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.696 [2024-07-23 08:54:28.106908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.696 [2024-07-23 08:54:28.106937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.696 [2024-07-23 08:54:28.106952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.106973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.696 [2024-07-23 08:54:28.107017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
00:41:15.696 [2024-07-23 08:54:28.107084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:15.696 [2024-07-23 08:54:28.107108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.107159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:15.696 [2024-07-23 08:54:28.107179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.107228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:15.696 [2024-07-23 08:54:28.107258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.107321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:15.696 [2024-07-23 08:54:28.107344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:41:15.696 [2024-07-23 08:54:28.107381] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:41:15.696 [2024-07-23 08:54:28.107414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.107457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.696 [2024-07-23 08:54:28.107510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:15.696 [2024-07-23 08:54:28.107534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:41:15.696 [2024-07-23 08:54:28.107551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:41:15.696 [2024-07-23 08:54:28.107567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:15.696 [2024-07-23 08:54:28.107589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:15.696 [2024-07-23 08:54:28.107840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.696 [2024-07-23 08:54:28.107870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.696 [2024-07-23 08:54:28.107885] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.107900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:15.696 [2024-07-23 08:54:28.107922] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:41:15.696 [2024-07-23 08:54:28.107942] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:41:15.696 [2024-07-23 08:54:28.107991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.108014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.108040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.696 [2024-07-23 08:54:28.108083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:15.696 [2024-07-23 08:54:28.108364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:15.696 [2024-07-23 08:54:28.108394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:15.696 [2024-07-23 08:54:28.108410] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.108432] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:41:15.696 [2024-07-23 08:54:28.108449] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:41:15.696 [2024-07-23 08:54:28.108465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.108504] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.108524] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.149508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.696 [2024-07-23 08:54:28.149549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.696 [2024-07-23 08:54:28.149566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.149583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:15.696 [2024-07-23 08:54:28.149636] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:41:15.696 [2024-07-23 08:54:28.149733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.149757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.149798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.696 [2024-07-23 08:54:28.149825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.149842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.149857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.149880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:41:15.696 [2024-07-23 08:54:28.149927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:15.696 [2024-07-23 08:54:28.149952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:41:15.696 [2024-07-23 08:54:28.150405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:15.696 [2024-07-23 08:54:28.150436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:15.696 [2024-07-23 08:54:28.150452] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.150468] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:41:15.696 [2024-07-23 08:54:28.150485] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:41:15.696 [2024-07-23 08:54:28.150508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.150533] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.150551] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.150577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.696 [2024-07-23 08:54:28.150607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.696 [2024-07-23 08:54:28.150624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.150640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:41:15.696 [2024-07-23 08:54:28.195345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.696 [2024-07-23 08:54:28.195383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.696 [2024-07-23 08:54:28.195399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.195414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:15.696 [2024-07-23 08:54:28.195465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.195488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:15.696 [2024-07-23 08:54:28.195517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.696 [2024-07-23 08:54:28.195577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:15.696 [2024-07-23 08:54:28.195882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:15.696 [2024-07-23 08:54:28.195910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:15.696 [2024-07-23 08:54:28.195924] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.195939] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:41:15.696 [2024-07-23 08:54:28.195955] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:41:15.696 [2024-07-23 08:54:28.195970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.195993] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.196010] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.196040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.696 [2024-07-23 08:54:28.196061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.696 [2024-07-23 08:54:28.196076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.696 [2024-07-23 08:54:28.196090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:15.696 [2024-07-23 08:54:28.196127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.697 [2024-07-23 08:54:28.196150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:15.697 [2024-07-23 08:54:28.196186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.697 [2024-07-23 08:54:28.196260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:15.697 [2024-07-23 08:54:28.196533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:15.697 [2024-07-23 08:54:28.196562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:15.697 [2024-07-23 08:54:28.196577] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:15.697 [2024-07-23 08:54:28.196591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:41:15.697 [2024-07-23 08:54:28.196607] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:41:15.697 [2024-07-23 08:54:28.196622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.697 [2024-07-23 08:54:28.196643] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:15.697 [2024-07-23 08:54:28.196659] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:15.958 [2024-07-23 08:54:28.237529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.958 [2024-07-23 08:54:28.237573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.958 [2024-07-23 08:54:28.237590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.958 [2024-07-23 08:54:28.237605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:15.958 ===================================================== 00:41:15.958 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:41:15.958 ===================================================== 00:41:15.958 Controller Capabilities/Features 00:41:15.958 ================================ 00:41:15.958 Vendor ID: 0000 00:41:15.958 Subsystem Vendor ID: 0000 00:41:15.958 Serial Number: .................... 00:41:15.958 Model Number: ........................................ 
00:41:15.958 Firmware Version: 24.09 00:41:15.958 Recommended Arb Burst: 0 00:41:15.958 IEEE OUI Identifier: 00 00 00 00:41:15.958 Multi-path I/O 00:41:15.958 May have multiple subsystem ports: No 00:41:15.958 May have multiple controllers: No 00:41:15.958 Associated with SR-IOV VF: No 00:41:15.958 Max Data Transfer Size: 131072 00:41:15.958 Max Number of Namespaces: 0 00:41:15.958 Max Number of I/O Queues: 1024 00:41:15.958 NVMe Specification Version (VS): 1.3 00:41:15.958 NVMe Specification Version (Identify): 1.3 00:41:15.958 Maximum Queue Entries: 128 00:41:15.958 Contiguous Queues Required: Yes 00:41:15.958 Arbitration Mechanisms Supported 00:41:15.958 Weighted Round Robin: Not Supported 00:41:15.958 Vendor Specific: Not Supported 00:41:15.958 Reset Timeout: 15000 ms 00:41:15.958 Doorbell Stride: 4 bytes 00:41:15.958 NVM Subsystem Reset: Not Supported 00:41:15.958 Command Sets Supported 00:41:15.958 NVM Command Set: Supported 00:41:15.958 Boot Partition: Not Supported 00:41:15.958 Memory Page Size Minimum: 4096 bytes 00:41:15.958 Memory Page Size Maximum: 4096 bytes 00:41:15.958 Persistent Memory Region: Not Supported 00:41:15.958 Optional Asynchronous Events Supported 00:41:15.958 Namespace Attribute Notices: Not Supported 00:41:15.958 Firmware Activation Notices: Not Supported 00:41:15.958 ANA Change Notices: Not Supported 00:41:15.958 PLE Aggregate Log Change Notices: Not Supported 00:41:15.958 LBA Status Info Alert Notices: Not Supported 00:41:15.958 EGE Aggregate Log Change Notices: Not Supported 00:41:15.958 Normal NVM Subsystem Shutdown event: Not Supported 00:41:15.958 Zone Descriptor Change Notices: Not Supported 00:41:15.958 Discovery Log Change Notices: Supported 00:41:15.958 Controller Attributes 00:41:15.958 128-bit Host Identifier: Not Supported 00:41:15.958 Non-Operational Permissive Mode: Not Supported 00:41:15.958 NVM Sets: Not Supported 00:41:15.958 Read Recovery Levels: Not Supported 00:41:15.958 Endurance Groups: Not Supported 00:41:15.958 Predictable Latency Mode: Not Supported 00:41:15.958 Traffic Based Keep ALive: Not Supported 00:41:15.958 Namespace Granularity: Not Supported 00:41:15.958 SQ Associations: Not Supported 00:41:15.958 UUID List: Not Supported 00:41:15.958 Multi-Domain Subsystem: Not Supported 00:41:15.958 Fixed Capacity Management: Not Supported 00:41:15.958 Variable Capacity Management: Not Supported 00:41:15.958 Delete Endurance Group: Not Supported 00:41:15.958 Delete NVM Set: Not Supported 00:41:15.958 Extended LBA Formats Supported: Not Supported 00:41:15.958 Flexible Data Placement Supported: Not Supported 00:41:15.958 00:41:15.958 Controller Memory Buffer Support 00:41:15.958 ================================ 00:41:15.958 Supported: No 00:41:15.958 00:41:15.958 Persistent Memory Region Support 00:41:15.958 ================================ 00:41:15.958 Supported: No 00:41:15.958 00:41:15.958 Admin Command Set Attributes 00:41:15.958 ============================ 00:41:15.958 Security Send/Receive: Not Supported 00:41:15.958 Format NVM: Not Supported 00:41:15.958 Firmware Activate/Download: Not Supported 00:41:15.958 Namespace Management: Not Supported 00:41:15.958 Device Self-Test: Not Supported 00:41:15.958 Directives: Not Supported 00:41:15.958 NVMe-MI: Not Supported 00:41:15.958 Virtualization Management: Not Supported 00:41:15.958 Doorbell Buffer Config: Not Supported 00:41:15.958 Get LBA Status Capability: Not Supported 00:41:15.958 Command & Feature Lockdown Capability: Not Supported 00:41:15.958 Abort Command Limit: 1 00:41:15.959 Async 
Event Request Limit: 4 00:41:15.959 Number of Firmware Slots: N/A 00:41:15.959 Firmware Slot 1 Read-Only: N/A 00:41:15.959 Firmware Activation Without Reset: N/A 00:41:15.959 Multiple Update Detection Support: N/A 00:41:15.959 Firmware Update Granularity: No Information Provided 00:41:15.959 Per-Namespace SMART Log: No 00:41:15.959 Asymmetric Namespace Access Log Page: Not Supported 00:41:15.959 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:41:15.959 Command Effects Log Page: Not Supported 00:41:15.959 Get Log Page Extended Data: Supported 00:41:15.959 Telemetry Log Pages: Not Supported 00:41:15.959 Persistent Event Log Pages: Not Supported 00:41:15.959 Supported Log Pages Log Page: May Support 00:41:15.959 Commands Supported & Effects Log Page: Not Supported 00:41:15.959 Feature Identifiers & Effects Log Page:May Support 00:41:15.959 NVMe-MI Commands & Effects Log Page: May Support 00:41:15.959 Data Area 4 for Telemetry Log: Not Supported 00:41:15.959 Error Log Page Entries Supported: 128 00:41:15.959 Keep Alive: Not Supported 00:41:15.959 00:41:15.959 NVM Command Set Attributes 00:41:15.959 ========================== 00:41:15.959 Submission Queue Entry Size 00:41:15.959 Max: 1 00:41:15.959 Min: 1 00:41:15.959 Completion Queue Entry Size 00:41:15.959 Max: 1 00:41:15.959 Min: 1 00:41:15.959 Number of Namespaces: 0 00:41:15.959 Compare Command: Not Supported 00:41:15.959 Write Uncorrectable Command: Not Supported 00:41:15.959 Dataset Management Command: Not Supported 00:41:15.959 Write Zeroes Command: Not Supported 00:41:15.959 Set Features Save Field: Not Supported 00:41:15.959 Reservations: Not Supported 00:41:15.959 Timestamp: Not Supported 00:41:15.959 Copy: Not Supported 00:41:15.959 Volatile Write Cache: Not Present 00:41:15.959 Atomic Write Unit (Normal): 1 00:41:15.959 Atomic Write Unit (PFail): 1 00:41:15.959 Atomic Compare & Write Unit: 1 00:41:15.959 Fused Compare & Write: Supported 00:41:15.959 Scatter-Gather List 00:41:15.959 SGL Command Set: Supported 00:41:15.959 SGL Keyed: Supported 00:41:15.959 SGL Bit Bucket Descriptor: Not Supported 00:41:15.959 SGL Metadata Pointer: Not Supported 00:41:15.959 Oversized SGL: Not Supported 00:41:15.959 SGL Metadata Address: Not Supported 00:41:15.959 SGL Offset: Supported 00:41:15.959 Transport SGL Data Block: Not Supported 00:41:15.959 Replay Protected Memory Block: Not Supported 00:41:15.959 00:41:15.959 Firmware Slot Information 00:41:15.959 ========================= 00:41:15.959 Active slot: 0 00:41:15.959 00:41:15.959 00:41:15.959 Error Log 00:41:15.959 ========= 00:41:15.959 00:41:15.959 Active Namespaces 00:41:15.959 ================= 00:41:15.959 Discovery Log Page 00:41:15.959 ================== 00:41:15.959 Generation Counter: 2 00:41:15.959 Number of Records: 2 00:41:15.959 Record Format: 0 00:41:15.959 00:41:15.959 Discovery Log Entry 0 00:41:15.959 ---------------------- 00:41:15.959 Transport Type: 3 (TCP) 00:41:15.959 Address Family: 1 (IPv4) 00:41:15.959 Subsystem Type: 3 (Current Discovery Subsystem) 00:41:15.959 Entry Flags: 00:41:15.959 Duplicate Returned Information: 1 00:41:15.959 Explicit Persistent Connection Support for Discovery: 1 00:41:15.959 Transport Requirements: 00:41:15.959 Secure Channel: Not Required 00:41:15.959 Port ID: 0 (0x0000) 00:41:15.959 Controller ID: 65535 (0xffff) 00:41:15.959 Admin Max SQ Size: 128 00:41:15.959 Transport Service Identifier: 4420 00:41:15.959 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:41:15.959 Transport Address: 10.0.0.2 00:41:15.959 
Discovery Log Entry 1 00:41:15.959 ---------------------- 00:41:15.959 Transport Type: 3 (TCP) 00:41:15.959 Address Family: 1 (IPv4) 00:41:15.959 Subsystem Type: 2 (NVM Subsystem) 00:41:15.959 Entry Flags: 00:41:15.959 Duplicate Returned Information: 0 00:41:15.959 Explicit Persistent Connection Support for Discovery: 0 00:41:15.959 Transport Requirements: 00:41:15.959 Secure Channel: Not Required 00:41:15.959 Port ID: 0 (0x0000) 00:41:15.959 Controller ID: 65535 (0xffff) 00:41:15.959 Admin Max SQ Size: 128 00:41:15.959 Transport Service Identifier: 4420 00:41:15.959 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:41:15.959 Transport Address: 10.0.0.2 [2024-07-23 08:54:28.237861] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:41:15.959 [2024-07-23 08:54:28.237905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:15.959 [2024-07-23 08:54:28.237934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:15.959 [2024-07-23 08:54:28.237955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:41:15.959 [2024-07-23 08:54:28.237974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:15.959 [2024-07-23 08:54:28.237991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:41:15.959 [2024-07-23 08:54:28.238009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:15.959 [2024-07-23 08:54:28.238026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:15.959 [2024-07-23 08:54:28.238044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:15.959 [2024-07-23 08:54:28.238073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.238092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.238107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:15.959 [2024-07-23 08:54:28.238135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.959 [2024-07-23 08:54:28.238193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:15.959 [2024-07-23 08:54:28.238428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.959 [2024-07-23 08:54:28.238459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.959 [2024-07-23 08:54:28.238475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.238491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:15.959 [2024-07-23 08:54:28.238521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.238540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.238556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000015700) 00:41:15.959 [2024-07-23 08:54:28.238589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.959 [2024-07-23 08:54:28.238654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:15.959 [2024-07-23 08:54:28.238900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.959 [2024-07-23 08:54:28.238927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.959 [2024-07-23 08:54:28.238942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.238957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:15.959 [2024-07-23 08:54:28.238977] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:41:15.959 [2024-07-23 08:54:28.238996] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:41:15.959 [2024-07-23 08:54:28.239030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.239057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.239073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:15.959 [2024-07-23 08:54:28.239099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.959 [2024-07-23 08:54:28.239142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:15.959 [2024-07-23 08:54:28.243341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.959 [2024-07-23 08:54:28.243374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.959 [2024-07-23 08:54:28.243390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.243405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:15.959 [2024-07-23 08:54:28.243444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.243464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:15.959 [2024-07-23 08:54:28.243479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:15.959 [2024-07-23 08:54:28.243504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:15.959 [2024-07-23 08:54:28.243548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:15.959 [2024-07-23 08:54:28.243775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:15.959 [2024-07-23 08:54:28.243802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:15.960 [2024-07-23 08:54:28.243817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:15.960 [2024-07-23 08:54:28.243832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:15.960 [2024-07-23 08:54:28.243861] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown 
complete in 4 milliseconds 00:41:15.960 00:41:15.960 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:41:15.960 [2024-07-23 08:54:28.418785] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:15.960 [2024-07-23 08:54:28.419027] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474201 ] 00:41:16.223 EAL: No free 2048 kB hugepages reported on node 1 00:41:16.223 [2024-07-23 08:54:28.539001] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:41:16.223 [2024-07-23 08:54:28.539170] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:41:16.223 [2024-07-23 08:54:28.539200] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:41:16.223 [2024-07-23 08:54:28.539244] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:41:16.223 [2024-07-23 08:54:28.539277] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:41:16.223 [2024-07-23 08:54:28.543808] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:41:16.223 [2024-07-23 08:54:28.543910] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:41:16.223 [2024-07-23 08:54:28.557373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:41:16.223 [2024-07-23 08:54:28.557426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:41:16.223 [2024-07-23 08:54:28.557448] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:41:16.223 [2024-07-23 08:54:28.557463] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:41:16.223 [2024-07-23 08:54:28.557565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.557594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.557618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.223 [2024-07-23 08:54:28.557661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:41:16.223 [2024-07-23 08:54:28.557716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.223 [2024-07-23 08:54:28.565342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.223 [2024-07-23 08:54:28.565378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.223 [2024-07-23 08:54:28.565402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.565421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.223 [2024-07-23 08:54:28.565466] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:41:16.223 [2024-07-23 08:54:28.565513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:41:16.223 [2024-07-23 08:54:28.565536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:41:16.223 [2024-07-23 08:54:28.565575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.565599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.565615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.223 [2024-07-23 08:54:28.565643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.223 [2024-07-23 08:54:28.565692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.223 [2024-07-23 08:54:28.565967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.223 [2024-07-23 08:54:28.566010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.223 [2024-07-23 08:54:28.566033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.566050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.223 [2024-07-23 08:54:28.566072] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:41:16.223 [2024-07-23 08:54:28.566103] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:41:16.223 [2024-07-23 08:54:28.566138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.566155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.566170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.223 [2024-07-23 08:54:28.566202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.223 [2024-07-23 08:54:28.566247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.223 [2024-07-23 08:54:28.566472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.223 [2024-07-23 08:54:28.566502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.223 [2024-07-23 08:54:28.566517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.566531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.223 [2024-07-23 08:54:28.566552] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:41:16.223 [2024-07-23 08:54:28.566591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:41:16.223 [2024-07-23 08:54:28.566626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.566645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.566660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.223 
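
The spdk_nvme_identify run above, with its -r transport string, drives the SPDK controller-init state machine that these "setting state to ..." lines trace (read VS, read CAP, CC.EN handling, IDENTIFY, SET FEATURES, keep-alive setup). The following is a minimal, untested C sketch of that same connect path against the public SPDK NVMe API; it is not part of the captured output, the app name "identify_sketch" is made up, and the function and field names follow spdk/env.h and spdk/nvme.h as of roughly this v24.09-pre tree and should be verified against the checkout used in this run.

/*
 * Sketch only, not part of the captured log: roughly what the identify tool
 * does with the -r string, via the public SPDK API.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr_opts ctrlr_opts;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport string the test passes via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Defaults include a 10000 ms keep-alive timeout. */
    spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));

    /* Runs the controller-init state machine traced in the surrounding log:
     * read VS/CAP, CC.EN toggling, wait for CSTS.RDY, IDENTIFY, SET FEATURES. */
    ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
    if (ctrlr == NULL) {
        return 1;
    }

    /* Matches the "MDTS max_xfer_size 131072" line reported during init. */
    printf("max transfer size: %u bytes\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

    spdk_nvme_detach(ctrlr);
    return 0;
}

The "Sending keep alive every 5000000 us" lines in this log are consistent with the driver sending keep-alives at half of the default 10000 ms keep_alive_timeout_ms; that halving is inferred from the numbers printed here rather than stated by the log itself.
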
[2024-07-23 08:54:28.566696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.223 [2024-07-23 08:54:28.566740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.223 [2024-07-23 08:54:28.566955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.223 [2024-07-23 08:54:28.566984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.223 [2024-07-23 08:54:28.566998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.567013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.223 [2024-07-23 08:54:28.567033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:41:16.223 [2024-07-23 08:54:28.567070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.567091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.567106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.223 [2024-07-23 08:54:28.567132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.223 [2024-07-23 08:54:28.567180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.223 [2024-07-23 08:54:28.567445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.223 [2024-07-23 08:54:28.567474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.223 [2024-07-23 08:54:28.567490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.223 [2024-07-23 08:54:28.567504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.223 [2024-07-23 08:54:28.567523] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:41:16.223 [2024-07-23 08:54:28.567543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:41:16.223 [2024-07-23 08:54:28.567580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:41:16.223 [2024-07-23 08:54:28.567702] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:41:16.223 [2024-07-23 08:54:28.567719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:41:16.224 [2024-07-23 08:54:28.567751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.567769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.567791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.567818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.224 [2024-07-23 08:54:28.567868] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.224 [2024-07-23 08:54:28.568131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.224 [2024-07-23 08:54:28.568159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.224 [2024-07-23 08:54:28.568179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.568195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.224 [2024-07-23 08:54:28.568214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:41:16.224 [2024-07-23 08:54:28.568251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.568280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.568295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.568335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.224 [2024-07-23 08:54:28.568382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.224 [2024-07-23 08:54:28.568584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.224 [2024-07-23 08:54:28.568618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.224 [2024-07-23 08:54:28.568635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.568649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.224 [2024-07-23 08:54:28.568666] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:41:16.224 [2024-07-23 08:54:28.568701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.568744] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:41:16.224 [2024-07-23 08:54:28.568777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.568815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.568833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.568860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.224 [2024-07-23 08:54:28.568910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.224 [2024-07-23 08:54:28.569238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.224 [2024-07-23 08:54:28.569268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.224 [2024-07-23 08:54:28.569284] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 
08:54:28.569300] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:41:16.224 [2024-07-23 08:54:28.573339] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:41:16.224 [2024-07-23 08:54:28.573365] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.573393] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.573413] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.573441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.224 [2024-07-23 08:54:28.573470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.224 [2024-07-23 08:54:28.573487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.573501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.224 [2024-07-23 08:54:28.573540] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:41:16.224 [2024-07-23 08:54:28.573563] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:41:16.224 [2024-07-23 08:54:28.573586] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:41:16.224 [2024-07-23 08:54:28.573604] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:41:16.224 [2024-07-23 08:54:28.573627] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:41:16.224 [2024-07-23 08:54:28.573646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.573678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.573705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.573723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.573738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.573786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:16.224 [2024-07-23 08:54:28.573838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.224 [2024-07-23 08:54:28.574080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.224 [2024-07-23 08:54:28.574107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.224 [2024-07-23 08:54:28.574122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.224 [2024-07-23 08:54:28.574168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574188] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.574239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:16.224 [2024-07-23 08:54:28.574263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.574338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:16.224 [2024-07-23 08:54:28.574361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.574408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:16.224 [2024-07-23 08:54:28.574428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.574476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:16.224 [2024-07-23 08:54:28.574494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.574531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.574574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.574594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.574619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.224 [2024-07-23 08:54:28.574663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:41:16.224 [2024-07-23 08:54:28.574687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:41:16.224 [2024-07-23 08:54:28.574703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:41:16.224 [2024-07-23 08:54:28.574718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:16.224 [2024-07-23 08:54:28.574733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:16.224 [2024-07-23 08:54:28.575044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:41:16.224 [2024-07-23 08:54:28.575080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.224 [2024-07-23 08:54:28.575097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.575111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:16.224 [2024-07-23 08:54:28.575133] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:41:16.224 [2024-07-23 08:54:28.575153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.575190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.575220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:41:16.224 [2024-07-23 08:54:28.575243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.575260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.224 [2024-07-23 08:54:28.575275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:16.224 [2024-07-23 08:54:28.575300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:16.224 [2024-07-23 08:54:28.575367] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:16.225 [2024-07-23 08:54:28.575662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.225 [2024-07-23 08:54:28.575690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.225 [2024-07-23 08:54:28.575704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.575719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:16.225 [2024-07-23 08:54:28.575856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.575905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.575941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.575959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:16.225 [2024-07-23 08:54:28.575985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.225 [2024-07-23 08:54:28.576042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:16.225 [2024-07-23 08:54:28.576299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.225 [2024-07-23 08:54:28.576345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.225 [2024-07-23 08:54:28.576363] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 
08:54:28.576376] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:41:16.225 [2024-07-23 08:54:28.576392] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:41:16.225 [2024-07-23 08:54:28.576407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.576451] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.576473] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.576499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.225 [2024-07-23 08:54:28.576526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.225 [2024-07-23 08:54:28.576542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.576555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:16.225 [2024-07-23 08:54:28.576616] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:41:16.225 [2024-07-23 08:54:28.576665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.576715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.576751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.576770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:16.225 [2024-07-23 08:54:28.576795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.225 [2024-07-23 08:54:28.576848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:16.225 [2024-07-23 08:54:28.577110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.225 [2024-07-23 08:54:28.577144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.225 [2024-07-23 08:54:28.577160] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.577174] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:41:16.225 [2024-07-23 08:54:28.577189] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:41:16.225 [2024-07-23 08:54:28.577204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.577240] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.577260] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.581322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.225 [2024-07-23 08:54:28.581365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.225 [2024-07-23 08:54:28.581382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.581397] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:16.225 [2024-07-23 08:54:28.581452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.581495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.581531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.581558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:16.225 [2024-07-23 08:54:28.581595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.225 [2024-07-23 08:54:28.581643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:16.225 [2024-07-23 08:54:28.581882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.225 [2024-07-23 08:54:28.581912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.225 [2024-07-23 08:54:28.581927] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.581940] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:41:16.225 [2024-07-23 08:54:28.581956] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:41:16.225 [2024-07-23 08:54:28.581977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582016] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582051] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.225 [2024-07-23 08:54:28.582098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.225 [2024-07-23 08:54:28.582113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:16.225 [2024-07-23 08:54:28.582163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.582197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.582228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.582252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.582275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.582296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.582325] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:41:16.225 [2024-07-23 08:54:28.582344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:41:16.225 [2024-07-23 08:54:28.582362] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:41:16.225 [2024-07-23 08:54:28.582432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:16.225 [2024-07-23 08:54:28.582479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.225 [2024-07-23 08:54:28.582527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:41:16.225 [2024-07-23 08:54:28.582589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:41:16.225 [2024-07-23 08:54:28.582642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:16.225 [2024-07-23 08:54:28.582672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:41:16.225 [2024-07-23 08:54:28.582897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.225 [2024-07-23 08:54:28.582927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.225 [2024-07-23 08:54:28.582942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.582965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:16.225 [2024-07-23 08:54:28.582995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.225 [2024-07-23 08:54:28.583022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.225 [2024-07-23 08:54:28.583038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.583051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:41:16.225 [2024-07-23 08:54:28.583085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.583105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:41:16.225 [2024-07-23 08:54:28.583128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.225 [2024-07-23 08:54:28.583170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:41:16.225 [2024-07-23 08:54:28.583441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.225 [2024-07-23 08:54:28.583472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:41:16.225 [2024-07-23 08:54:28.583488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.225 [2024-07-23 08:54:28.583502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:41:16.225 [2024-07-23 08:54:28.583536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.583556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:41:16.226 [2024-07-23 08:54:28.583579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.226 [2024-07-23 08:54:28.583620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:41:16.226 [2024-07-23 08:54:28.583883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.226 [2024-07-23 08:54:28.583913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.226 [2024-07-23 08:54:28.583927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.583942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:41:16.226 [2024-07-23 08:54:28.583975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.583995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:41:16.226 [2024-07-23 08:54:28.584019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.226 [2024-07-23 08:54:28.584059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:41:16.226 [2024-07-23 08:54:28.584303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.226 [2024-07-23 08:54:28.584343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.226 [2024-07-23 08:54:28.584358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.584373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:41:16.226 [2024-07-23 08:54:28.584433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.584458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:41:16.226 [2024-07-23 08:54:28.584483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.226 [2024-07-23 08:54:28.584518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.584538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:41:16.226 [2024-07-23 08:54:28.584561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.226 [2024-07-23 08:54:28.584589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.584607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:41:16.226 
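
The GET LOG PAGE (02) commands printed around this point (nsid:ffffffff, with cdw10 selecting log identifiers 0x01, 0x02, 0x03 and 0x05) are how the identify tool walks the controller's log pages after Get Features. Below is a small, untested sketch of issuing one such read through the public admin-queue API; it is not part of the captured output, read_error_log() and log_page_done() are illustrative names rather than SPDK symbols, and the 128-entry buffer simply mirrors the "Error Log Page Entries Supported: 128" value reported earlier.

/*
 * Sketch only, not part of the captured log: one way to read the Error
 * Information log page (0x01) over the admin queue with the public SPDK API.
 */
#include "spdk/nvme.h"

static bool g_log_page_done;

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    /* cpl->status reports success or an NVMe error for the admin command. */
    (void)cb_arg;
    (void)cpl;
    g_log_page_done = true;
}

static int
read_error_log(struct spdk_nvme_ctrlr *ctrlr)
{
    /* 128 entries mirrors the controller's reported error-log depth. */
    static struct spdk_nvme_error_information_entry errors[128];

    /* Log page 0x01, nsid 0xffffffff, as in the command prints above. */
    int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_ERROR,
                                              SPDK_NVME_GLOBAL_NS_TAG,
                                              errors, sizeof(errors), 0,
                                              log_page_done, NULL);
    if (rc != 0) {
        return rc;
    }

    /* Polling the admin queue is what produces the nvme_tcp_req_complete
     * debug lines that pair with each *NOTICE* command print in this log. */
    while (!g_log_page_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    return 0;
}
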
[2024-07-23 08:54:28.584630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.226 [2024-07-23 08:54:28.584669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.584693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:41:16.226 [2024-07-23 08:54:28.584717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.226 [2024-07-23 08:54:28.584762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:41:16.226 [2024-07-23 08:54:28.584786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:41:16.226 [2024-07-23 08:54:28.584802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:41:16.226 [2024-07-23 08:54:28.584817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:41:16.226 [2024-07-23 08:54:28.585244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.226 [2024-07-23 08:54:28.585273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.226 [2024-07-23 08:54:28.585289] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.585303] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:41:16.226 [2024-07-23 08:54:28.589346] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:41:16.226 [2024-07-23 08:54:28.589367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589429] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589453] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.226 [2024-07-23 08:54:28.589494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.226 [2024-07-23 08:54:28.589508] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589522] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:41:16.226 [2024-07-23 08:54:28.589537] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:41:16.226 [2024-07-23 08:54:28.589551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589581] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589603] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.226 [2024-07-23 08:54:28.589642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.226 [2024-07-23 08:54:28.589656] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589669] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:41:16.226 [2024-07-23 08:54:28.589690] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:41:16.226 [2024-07-23 08:54:28.589705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589725] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589741] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:41:16.226 [2024-07-23 08:54:28.589783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:41:16.226 [2024-07-23 08:54:28.589798] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589812] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:41:16.226 [2024-07-23 08:54:28.589827] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:41:16.226 [2024-07-23 08:54:28.589841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589862] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589877] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.226 [2024-07-23 08:54:28.589914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.226 [2024-07-23 08:54:28.589928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.589950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:41:16.226 [2024-07-23 08:54:28.590003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.226 [2024-07-23 08:54:28.590028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.226 [2024-07-23 08:54:28.590042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.590056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:41:16.226 [2024-07-23 08:54:28.590087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.226 [2024-07-23 08:54:28.590115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.226 [2024-07-23 08:54:28.590130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.590144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:41:16.226 [2024-07-23 08:54:28.590174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.226 [2024-07-23 08:54:28.590197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.226 [2024-07-23 08:54:28.590211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.226 [2024-07-23 08:54:28.590229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:41:16.226 ===================================================== 
00:41:16.226 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:16.226 ===================================================== 00:41:16.226 Controller Capabilities/Features 00:41:16.226 ================================ 00:41:16.226 Vendor ID: 8086 00:41:16.226 Subsystem Vendor ID: 8086 00:41:16.226 Serial Number: SPDK00000000000001 00:41:16.226 Model Number: SPDK bdev Controller 00:41:16.226 Firmware Version: 24.09 00:41:16.226 Recommended Arb Burst: 6 00:41:16.226 IEEE OUI Identifier: e4 d2 5c 00:41:16.226 Multi-path I/O 00:41:16.226 May have multiple subsystem ports: Yes 00:41:16.226 May have multiple controllers: Yes 00:41:16.226 Associated with SR-IOV VF: No 00:41:16.226 Max Data Transfer Size: 131072 00:41:16.226 Max Number of Namespaces: 32 00:41:16.226 Max Number of I/O Queues: 127 00:41:16.226 NVMe Specification Version (VS): 1.3 00:41:16.226 NVMe Specification Version (Identify): 1.3 00:41:16.226 Maximum Queue Entries: 128 00:41:16.226 Contiguous Queues Required: Yes 00:41:16.226 Arbitration Mechanisms Supported 00:41:16.226 Weighted Round Robin: Not Supported 00:41:16.226 Vendor Specific: Not Supported 00:41:16.226 Reset Timeout: 15000 ms 00:41:16.226 Doorbell Stride: 4 bytes 00:41:16.226 NVM Subsystem Reset: Not Supported 00:41:16.226 Command Sets Supported 00:41:16.226 NVM Command Set: Supported 00:41:16.226 Boot Partition: Not Supported 00:41:16.226 Memory Page Size Minimum: 4096 bytes 00:41:16.226 Memory Page Size Maximum: 4096 bytes 00:41:16.226 Persistent Memory Region: Not Supported 00:41:16.226 Optional Asynchronous Events Supported 00:41:16.226 Namespace Attribute Notices: Supported 00:41:16.227 Firmware Activation Notices: Not Supported 00:41:16.227 ANA Change Notices: Not Supported 00:41:16.227 PLE Aggregate Log Change Notices: Not Supported 00:41:16.227 LBA Status Info Alert Notices: Not Supported 00:41:16.227 EGE Aggregate Log Change Notices: Not Supported 00:41:16.227 Normal NVM Subsystem Shutdown event: Not Supported 00:41:16.227 Zone Descriptor Change Notices: Not Supported 00:41:16.227 Discovery Log Change Notices: Not Supported 00:41:16.227 Controller Attributes 00:41:16.227 128-bit Host Identifier: Supported 00:41:16.227 Non-Operational Permissive Mode: Not Supported 00:41:16.227 NVM Sets: Not Supported 00:41:16.227 Read Recovery Levels: Not Supported 00:41:16.227 Endurance Groups: Not Supported 00:41:16.227 Predictable Latency Mode: Not Supported 00:41:16.227 Traffic Based Keep ALive: Not Supported 00:41:16.227 Namespace Granularity: Not Supported 00:41:16.227 SQ Associations: Not Supported 00:41:16.227 UUID List: Not Supported 00:41:16.227 Multi-Domain Subsystem: Not Supported 00:41:16.227 Fixed Capacity Management: Not Supported 00:41:16.227 Variable Capacity Management: Not Supported 00:41:16.227 Delete Endurance Group: Not Supported 00:41:16.227 Delete NVM Set: Not Supported 00:41:16.227 Extended LBA Formats Supported: Not Supported 00:41:16.227 Flexible Data Placement Supported: Not Supported 00:41:16.227 00:41:16.227 Controller Memory Buffer Support 00:41:16.227 ================================ 00:41:16.227 Supported: No 00:41:16.227 00:41:16.227 Persistent Memory Region Support 00:41:16.227 ================================ 00:41:16.227 Supported: No 00:41:16.227 00:41:16.227 Admin Command Set Attributes 00:41:16.227 ============================ 00:41:16.227 Security Send/Receive: Not Supported 00:41:16.227 Format NVM: Not Supported 00:41:16.227 Firmware Activate/Download: Not Supported 00:41:16.227 Namespace Management: Not 
Supported 00:41:16.227 Device Self-Test: Not Supported 00:41:16.227 Directives: Not Supported 00:41:16.227 NVMe-MI: Not Supported 00:41:16.227 Virtualization Management: Not Supported 00:41:16.227 Doorbell Buffer Config: Not Supported 00:41:16.227 Get LBA Status Capability: Not Supported 00:41:16.227 Command & Feature Lockdown Capability: Not Supported 00:41:16.227 Abort Command Limit: 4 00:41:16.227 Async Event Request Limit: 4 00:41:16.227 Number of Firmware Slots: N/A 00:41:16.227 Firmware Slot 1 Read-Only: N/A 00:41:16.227 Firmware Activation Without Reset: N/A 00:41:16.227 Multiple Update Detection Support: N/A 00:41:16.227 Firmware Update Granularity: No Information Provided 00:41:16.227 Per-Namespace SMART Log: No 00:41:16.227 Asymmetric Namespace Access Log Page: Not Supported 00:41:16.227 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:41:16.227 Command Effects Log Page: Supported 00:41:16.227 Get Log Page Extended Data: Supported 00:41:16.227 Telemetry Log Pages: Not Supported 00:41:16.227 Persistent Event Log Pages: Not Supported 00:41:16.227 Supported Log Pages Log Page: May Support 00:41:16.227 Commands Supported & Effects Log Page: Not Supported 00:41:16.227 Feature Identifiers & Effects Log Page:May Support 00:41:16.227 NVMe-MI Commands & Effects Log Page: May Support 00:41:16.227 Data Area 4 for Telemetry Log: Not Supported 00:41:16.227 Error Log Page Entries Supported: 128 00:41:16.227 Keep Alive: Supported 00:41:16.227 Keep Alive Granularity: 10000 ms 00:41:16.227 00:41:16.227 NVM Command Set Attributes 00:41:16.227 ========================== 00:41:16.227 Submission Queue Entry Size 00:41:16.227 Max: 64 00:41:16.227 Min: 64 00:41:16.227 Completion Queue Entry Size 00:41:16.227 Max: 16 00:41:16.227 Min: 16 00:41:16.227 Number of Namespaces: 32 00:41:16.227 Compare Command: Supported 00:41:16.227 Write Uncorrectable Command: Not Supported 00:41:16.227 Dataset Management Command: Supported 00:41:16.227 Write Zeroes Command: Supported 00:41:16.227 Set Features Save Field: Not Supported 00:41:16.227 Reservations: Supported 00:41:16.227 Timestamp: Not Supported 00:41:16.227 Copy: Supported 00:41:16.227 Volatile Write Cache: Present 00:41:16.227 Atomic Write Unit (Normal): 1 00:41:16.227 Atomic Write Unit (PFail): 1 00:41:16.227 Atomic Compare & Write Unit: 1 00:41:16.227 Fused Compare & Write: Supported 00:41:16.227 Scatter-Gather List 00:41:16.227 SGL Command Set: Supported 00:41:16.227 SGL Keyed: Supported 00:41:16.227 SGL Bit Bucket Descriptor: Not Supported 00:41:16.227 SGL Metadata Pointer: Not Supported 00:41:16.227 Oversized SGL: Not Supported 00:41:16.227 SGL Metadata Address: Not Supported 00:41:16.227 SGL Offset: Supported 00:41:16.227 Transport SGL Data Block: Not Supported 00:41:16.227 Replay Protected Memory Block: Not Supported 00:41:16.227 00:41:16.227 Firmware Slot Information 00:41:16.227 ========================= 00:41:16.227 Active slot: 1 00:41:16.227 Slot 1 Firmware Revision: 24.09 00:41:16.227 00:41:16.227 00:41:16.227 Commands Supported and Effects 00:41:16.227 ============================== 00:41:16.227 Admin Commands 00:41:16.227 -------------- 00:41:16.227 Get Log Page (02h): Supported 00:41:16.227 Identify (06h): Supported 00:41:16.227 Abort (08h): Supported 00:41:16.227 Set Features (09h): Supported 00:41:16.227 Get Features (0Ah): Supported 00:41:16.227 Asynchronous Event Request (0Ch): Supported 00:41:16.227 Keep Alive (18h): Supported 00:41:16.227 I/O Commands 00:41:16.227 ------------ 00:41:16.227 Flush (00h): Supported LBA-Change 00:41:16.227 Write 
(01h): Supported LBA-Change 00:41:16.227 Read (02h): Supported 00:41:16.227 Compare (05h): Supported 00:41:16.227 Write Zeroes (08h): Supported LBA-Change 00:41:16.227 Dataset Management (09h): Supported LBA-Change 00:41:16.227 Copy (19h): Supported LBA-Change 00:41:16.227 00:41:16.227 Error Log 00:41:16.227 ========= 00:41:16.227 00:41:16.227 Arbitration 00:41:16.227 =========== 00:41:16.227 Arbitration Burst: 1 00:41:16.227 00:41:16.227 Power Management 00:41:16.227 ================ 00:41:16.227 Number of Power States: 1 00:41:16.227 Current Power State: Power State #0 00:41:16.227 Power State #0: 00:41:16.227 Max Power: 0.00 W 00:41:16.227 Non-Operational State: Operational 00:41:16.227 Entry Latency: Not Reported 00:41:16.227 Exit Latency: Not Reported 00:41:16.227 Relative Read Throughput: 0 00:41:16.227 Relative Read Latency: 0 00:41:16.227 Relative Write Throughput: 0 00:41:16.227 Relative Write Latency: 0 00:41:16.227 Idle Power: Not Reported 00:41:16.227 Active Power: Not Reported 00:41:16.227 Non-Operational Permissive Mode: Not Supported 00:41:16.227 00:41:16.227 Health Information 00:41:16.227 ================== 00:41:16.227 Critical Warnings: 00:41:16.227 Available Spare Space: OK 00:41:16.227 Temperature: OK 00:41:16.227 Device Reliability: OK 00:41:16.227 Read Only: No 00:41:16.227 Volatile Memory Backup: OK 00:41:16.227 Current Temperature: 0 Kelvin (-273 Celsius) 00:41:16.227 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:41:16.227 Available Spare: 0% 00:41:16.227 Available Spare Threshold: 0% 00:41:16.227 Life Percentage Used:[2024-07-23 08:54:28.590520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.227 [2024-07-23 08:54:28.590546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:41:16.227 [2024-07-23 08:54:28.590573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.227 [2024-07-23 08:54:28.590620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:41:16.227 [2024-07-23 08:54:28.590888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.227 [2024-07-23 08:54:28.590918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.227 [2024-07-23 08:54:28.590935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.227 [2024-07-23 08:54:28.590950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:41:16.227 [2024-07-23 08:54:28.591064] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:41:16.227 [2024-07-23 08:54:28.591113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:41:16.227 [2024-07-23 08:54:28.591151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:16.227 [2024-07-23 08:54:28.591171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:41:16.228 [2024-07-23 08:54:28.591190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:16.228 [2024-07-23 08:54:28.591206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 
00:41:16.228 [2024-07-23 08:54:28.591224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:16.228 [2024-07-23 08:54:28.591241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:16.228 [2024-07-23 08:54:28.591267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:16.228 [2024-07-23 08:54:28.591298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.591331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.591348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:16.228 [2024-07-23 08:54:28.591375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.228 [2024-07-23 08:54:28.591424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:16.228 [2024-07-23 08:54:28.591650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.228 [2024-07-23 08:54:28.591680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.228 [2024-07-23 08:54:28.591696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.591712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:16.228 [2024-07-23 08:54:28.591746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.591764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.591779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:16.228 [2024-07-23 08:54:28.591813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.228 [2024-07-23 08:54:28.591870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:16.228 [2024-07-23 08:54:28.592160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.228 [2024-07-23 08:54:28.592187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.228 [2024-07-23 08:54:28.592202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.592216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:16.228 [2024-07-23 08:54:28.592235] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:41:16.228 [2024-07-23 08:54:28.592254] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:41:16.228 [2024-07-23 08:54:28.592288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.592332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.592350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:16.228 [2024-07-23 08:54:28.592381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.228 [2024-07-23 08:54:28.592429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:16.228 [2024-07-23 08:54:28.592648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.228 [2024-07-23 08:54:28.592677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.228 [2024-07-23 08:54:28.592692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.592706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:16.228 [2024-07-23 08:54:28.592745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.592764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.592778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:16.228 [2024-07-23 08:54:28.592802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.228 [2024-07-23 08:54:28.592842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:16.228 [2024-07-23 08:54:28.593104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.228 [2024-07-23 08:54:28.593139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.228 [2024-07-23 08:54:28.593155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.593170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:16.228 [2024-07-23 08:54:28.593205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.593225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.593239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:41:16.228 [2024-07-23 08:54:28.593262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:16.228 [2024-07-23 08:54:28.593302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:41:16.228 [2024-07-23 08:54:28.597358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:41:16.228 [2024-07-23 08:54:28.597390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:41:16.228 [2024-07-23 08:54:28.597414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:41:16.228 [2024-07-23 08:54:28.597429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:41:16.228 [2024-07-23 08:54:28.597461] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:41:16.228 0% 00:41:16.228 Data Units Read: 0 00:41:16.228 Data Units Written: 0 00:41:16.228 Host Read Commands: 0 00:41:16.228 Host Write Commands: 0 00:41:16.228 Controller Busy Time: 0 minutes 00:41:16.228 Power Cycles: 0 00:41:16.228 Power On Hours: 0 hours 00:41:16.228 Unsafe Shutdowns: 0 00:41:16.228 Unrecoverable Media Errors: 0 00:41:16.228 Lifetime Error Log Entries: 0 00:41:16.228 Warning Temperature Time: 0 minutes 00:41:16.228 Critical Temperature Time: 0 
minutes 00:41:16.228 00:41:16.228 Number of Queues 00:41:16.228 ================ 00:41:16.228 Number of I/O Submission Queues: 127 00:41:16.228 Number of I/O Completion Queues: 127 00:41:16.228 00:41:16.228 Active Namespaces 00:41:16.228 ================= 00:41:16.228 Namespace ID:1 00:41:16.228 Error Recovery Timeout: Unlimited 00:41:16.228 Command Set Identifier: NVM (00h) 00:41:16.228 Deallocate: Supported 00:41:16.228 Deallocated/Unwritten Error: Not Supported 00:41:16.228 Deallocated Read Value: Unknown 00:41:16.228 Deallocate in Write Zeroes: Not Supported 00:41:16.228 Deallocated Guard Field: 0xFFFF 00:41:16.228 Flush: Supported 00:41:16.228 Reservation: Supported 00:41:16.228 Namespace Sharing Capabilities: Multiple Controllers 00:41:16.228 Size (in LBAs): 131072 (0GiB) 00:41:16.228 Capacity (in LBAs): 131072 (0GiB) 00:41:16.228 Utilization (in LBAs): 131072 (0GiB) 00:41:16.228 NGUID: ABCDEF0123456789ABCDEF0123456789 00:41:16.228 EUI64: ABCDEF0123456789 00:41:16.228 UUID: e55b5af7-1b7c-4dfd-b202-3da18c6c86a5 00:41:16.228 Thin Provisioning: Not Supported 00:41:16.228 Per-NS Atomic Units: Yes 00:41:16.228 Atomic Boundary Size (Normal): 0 00:41:16.228 Atomic Boundary Size (PFail): 0 00:41:16.228 Atomic Boundary Offset: 0 00:41:16.228 Maximum Single Source Range Length: 65535 00:41:16.228 Maximum Copy Length: 65535 00:41:16.228 Maximum Source Range Count: 1 00:41:16.228 NGUID/EUI64 Never Reused: No 00:41:16.228 Namespace Write Protected: No 00:41:16.228 Number of LBA Formats: 1 00:41:16.228 Current LBA Format: LBA Format #00 00:41:16.228 LBA Format #00: Data Size: 512 Metadata Size: 0 00:41:16.228 00:41:16.228 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:41:16.228 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:16.228 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.228 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:16.228 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:16.229 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:16.229 rmmod nvme_tcp 00:41:16.229 rmmod nvme_fabrics 00:41:16.487 rmmod nvme_keyring 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2473914 ']' 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2473914 00:41:16.487 
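[editor's note] The controller and namespace dump above is the output of SPDK's identify example app, assembled from the Identify and Get Log Page data it fetched over the NVMe/TCP admin queue traced in the surrounding DEBUG lines. A minimal sketch of reproducing a comparable dump against the same listener follows; the traddr, trsvcid and subnqn are taken from this run, while the binary location and the -r transport-ID syntax are assumptions based on a standard SPDK build layout rather than anything shown in this log.

  # Hedged sketch: re-query the subsystem this run exposed on 10.0.0.2:4420.
  # The build/examples/identify path is an assumption, not taken from this trace.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo "$SPDK_DIR/build/examples/identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'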
08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2473914 ']' 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2473914 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2473914 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:16.487 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2473914' 00:41:16.487 killing process with pid 2473914 00:41:16.488 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2473914 00:41:16.488 08:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2473914 00:41:19.027 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:19.027 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:19.027 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:19.027 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:19.027 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:19.027 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.028 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:19.028 08:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:20.942 00:41:20.942 real 0m10.119s 00:41:20.942 user 0m14.081s 00:41:20.942 sys 0m3.344s 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:41:20.942 ************************************ 00:41:20.942 END TEST nvmf_identify 00:41:20.942 ************************************ 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:20.942 ************************************ 00:41:20.942 START TEST nvmf_perf 00:41:20.942 ************************************ 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:41:20.942 * Looking for test storage... 
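[editor's note] The nvmf_identify teardown traced just above reduces to a short, fixed sequence: drop the test subsystem over RPC, unload the host-side fabric modules, stop the nvmf_tgt reactor, and clear the namespace and addresses the harness created. A compressed sketch is below; /var/tmp/spdk.sock is the default RPC socket and an assumption here, pid 2473914 is simply the nvmf_tgt instance from this run, and the explicit netns delete stands in for what remove_spdk_ns does internally.

  # Hedged sketch of the teardown recorded above (nvmftestfini + nvmfcleanup).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo modprobe -v -r nvme-tcp          # initiator-side modules, mirrors the rmmod messages above
  sudo modprobe -v -r nvme-fabrics
  sudo kill 2473914                     # nvmf_tgt process from this run
  sudo ip netns delete cvl_0_0_ns_spdk  # assumption: equivalent of remove_spdk_ns
  sudo ip -4 addr flush cvl_0_1         # clear the initiator-side address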
00:41:20.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
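[editor's note] common.sh, sourced above, derives a host identity once per run: nvme gen-hostnqn produces the NQN, its trailing UUID becomes NVME_HOSTID, and both are kept in the NVME_HOST array so later nvme connect calls present a stable identity to the target. A minimal sketch of that pattern follows; the listener shown is the 10.0.0.2:4420 / cnode1 target from this log, and the connect step itself is outside this excerpt, so treat the invocation as illustrative rather than a command the script ran here.

  # Hedged sketch: derive a host NQN/ID pair the way common.sh does and hand it to nvme-cli.
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"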
00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:41:20.942 08:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:24.237 
08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:24.237 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:24.237 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:41:24.237 Found net devices under 0000:84:00.0: cvl_0_0 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:24.237 Found net devices under 0000:84:00.1: cvl_0_1 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:24.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:24.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:41:24.237 00:41:24.237 --- 10.0.0.2 ping statistics --- 00:41:24.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.237 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:24.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:24.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:41:24.237 00:41:24.237 --- 10.0.0.1 ping statistics --- 00:41:24.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:24.237 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:24.237 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:24.497 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:41:24.497 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2476539 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2476539 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2476539 ']' 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:24.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
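[editor's note] For readability, the interface and namespace wiring that nvmf_tcp_init performs above condenses to the commands below. Every command is lifted from the trace; only the grouping, the sudo prefixes, the comments and the trailing ampersand on the target launch are editorial, so read it as a summary of this run rather than a standalone recipe.

  # Target-side E810 port (cvl_0_0) moves into its own namespace; the peer port (cvl_0_1)
  # stays in the root namespace and acts as the initiator.
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                       # initiator -> target check
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
  # The target itself then runs inside the namespace with core mask 0xF and all trace groups enabled:
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &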
00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:24.498 08:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:41:24.498 [2024-07-23 08:54:36.907666] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:24.498 [2024-07-23 08:54:36.907858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:24.498 EAL: No free 2048 kB hugepages reported on node 1 00:41:24.757 [2024-07-23 08:54:37.119937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:25.329 [2024-07-23 08:54:37.611983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:25.329 [2024-07-23 08:54:37.612117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:25.329 [2024-07-23 08:54:37.612179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:25.329 [2024-07-23 08:54:37.612227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:25.329 [2024-07-23 08:54:37.612273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:25.329 [2024-07-23 08:54:37.612463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:25.329 [2024-07-23 08:54:37.612530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:25.329 [2024-07-23 08:54:37.612558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:25.329 [2024-07-23 08:54:37.612575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:25.897 08:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:41:29.193 08:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:41:29.193 08:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:41:29.762 08:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:41:29.762 08:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:30.344 08:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:41:30.344 08:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:82:00.0 ']' 00:41:30.344 08:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:41:30.344 08:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:41:30.344 08:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:41:30.921 [2024-07-23 08:54:43.357943] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:30.921 08:54:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:31.489 08:54:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:41:31.489 08:54:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:32.427 08:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:41:32.427 08:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:32.997 08:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:33.566 [2024-07-23 08:54:45.819600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:33.566 08:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:34.136 08:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:41:34.136 08:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:41:34.136 08:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:41:34.136 08:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:41:36.043 Initializing NVMe Controllers 00:41:36.043 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:41:36.043 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:41:36.043 Initialization complete. Launching workers. 
00:41:36.043 ======================================================== 00:41:36.043 Latency(us) 00:41:36.043 Device Information : IOPS MiB/s Average min max 00:41:36.043 PCIE (0000:82:00.0) NSID 1 from core 0: 54342.62 212.28 590.04 69.66 5962.36 00:41:36.043 ======================================================== 00:41:36.043 Total : 54342.62 212.28 590.04 69.66 5962.36 00:41:36.043 00:41:36.043 08:54:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:36.043 EAL: No free 2048 kB hugepages reported on node 1 00:41:37.424 Initializing NVMe Controllers 00:41:37.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:37.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:37.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:41:37.424 Initialization complete. Launching workers. 00:41:37.424 ======================================================== 00:41:37.424 Latency(us) 00:41:37.424 Device Information : IOPS MiB/s Average min max 00:41:37.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 126.00 0.49 7936.80 290.97 44901.51 00:41:37.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17119.39 4932.92 48008.15 00:41:37.424 ======================================================== 00:41:37.424 Total : 187.00 0.73 10932.19 290.97 48008.15 00:41:37.424 00:41:37.684 08:54:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:37.684 EAL: No free 2048 kB hugepages reported on node 1 00:41:39.066 Initializing NVMe Controllers 00:41:39.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:39.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:39.066 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:41:39.066 Initialization complete. Launching workers. 
00:41:39.066 ======================================================== 00:41:39.066 Latency(us) 00:41:39.066 Device Information : IOPS MiB/s Average min max 00:41:39.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4524.99 17.68 7102.45 1066.94 9893.17 00:41:39.066 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2908.99 11.36 11050.61 6180.50 17667.80 00:41:39.066 ======================================================== 00:41:39.066 Total : 7433.98 29.04 8647.41 1066.94 17667.80 00:41:39.066 00:41:39.066 08:54:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:41:39.066 08:54:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:41:39.066 08:54:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:39.066 EAL: No free 2048 kB hugepages reported on node 1 00:41:42.360 Initializing NVMe Controllers 00:41:42.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:42.360 Controller IO queue size 128, less than required. 00:41:42.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:42.360 Controller IO queue size 128, less than required. 00:41:42.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:42.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:42.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:41:42.360 Initialization complete. Launching workers. 00:41:42.360 ======================================================== 00:41:42.360 Latency(us) 00:41:42.360 Device Information : IOPS MiB/s Average min max 00:41:42.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 792.43 198.11 174351.70 88170.15 455759.23 00:41:42.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 498.95 124.74 271235.34 134919.89 595398.72 00:41:42.360 ======================================================== 00:41:42.360 Total : 1291.38 322.85 211784.87 88170.15 595398.72 00:41:42.360 00:41:42.360 08:54:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:41:42.360 EAL: No free 2048 kB hugepages reported on node 1 00:41:42.621 No valid NVMe controllers or AIO or URING devices found 00:41:42.621 Initializing NVMe Controllers 00:41:42.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:42.621 Controller IO queue size 128, less than required. 00:41:42.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:42.621 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:41:42.621 Controller IO queue size 128, less than required. 00:41:42.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:42.621 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:41:42.621 WARNING: Some requested NVMe devices were skipped 00:41:42.621 08:54:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:41:42.621 EAL: No free 2048 kB hugepages reported on node 1 00:41:45.917 Initializing NVMe Controllers 00:41:45.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:45.918 Controller IO queue size 128, less than required. 00:41:45.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:45.918 Controller IO queue size 128, less than required. 00:41:45.918 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:45.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:45.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:41:45.918 Initialization complete. Launching workers. 00:41:45.918 00:41:45.918 ==================== 00:41:45.918 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:41:45.918 TCP transport: 00:41:45.918 polls: 4328 00:41:45.918 idle_polls: 2563 00:41:45.918 sock_completions: 1765 00:41:45.918 nvme_completions: 3187 00:41:45.918 submitted_requests: 4686 00:41:45.918 queued_requests: 1 00:41:45.918 00:41:45.918 ==================== 00:41:45.918 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:41:45.918 TCP transport: 00:41:45.918 polls: 6250 00:41:45.918 idle_polls: 4164 00:41:45.918 sock_completions: 2086 00:41:45.918 nvme_completions: 3659 00:41:45.918 submitted_requests: 5474 00:41:45.918 queued_requests: 1 00:41:45.918 ======================================================== 00:41:45.918 Latency(us) 00:41:45.918 Device Information : IOPS MiB/s Average min max 00:41:45.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 795.51 198.88 173154.60 104252.03 482097.19 00:41:45.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 913.36 228.34 148865.28 79531.81 550549.82 00:41:45.918 ======================================================== 00:41:45.918 Total : 1708.86 427.22 160172.38 79531.81 550549.82 00:41:45.918 00:41:45.918 08:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:41:45.918 08:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:46.488 08:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:41:46.488 08:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:82:00.0 ']' 00:41:46.488 08:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:41:50.685 08:55:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c 00:41:50.685 08:55:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c 00:41:50.685 08:55:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c 00:41:50.685 08:55:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:41:50.685 08:55:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:41:50.685 08:55:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:41:50.685 08:55:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:41:50.685 { 00:41:50.685 "uuid": "82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c", 00:41:50.685 "name": "lvs_0", 00:41:50.685 "base_bdev": "Nvme0n1", 00:41:50.685 "total_data_clusters": 238234, 00:41:50.685 "free_clusters": 238234, 00:41:50.685 "block_size": 512, 00:41:50.685 "cluster_size": 4194304 00:41:50.685 } 00:41:50.685 ]' 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c") .free_clusters' 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c") .cluster_size' 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:41:50.685 952936 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:41:50.685 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c lbd_0 20480 00:41:51.664 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8868a2ee-9cba-49d4-ba24-83414429cd4c 00:41:51.664 08:55:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8868a2ee-9cba-49d4-ba24-83414429cd4c lvs_n_0 00:41:52.602 08:55:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=3246e21b-eab0-4df8-9d7d-2002c5c95dd1 00:41:52.602 08:55:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 3246e21b-eab0-4df8-9d7d-2002c5c95dd1 00:41:52.602 08:55:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=3246e21b-eab0-4df8-9d7d-2002c5c95dd1 00:41:52.602 08:55:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:41:52.602 08:55:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:41:52.602 08:55:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:41:52.602 08:55:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:52.861 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:41:52.861 { 00:41:52.861 "uuid": "82e7d721-cbb4-4ddb-9ed7-8f6ec1796f9c", 00:41:52.861 "name": "lvs_0", 00:41:52.861 "base_bdev": "Nvme0n1", 00:41:52.861 "total_data_clusters": 238234, 
00:41:52.861 "free_clusters": 233114, 00:41:52.861 "block_size": 512, 00:41:52.861 "cluster_size": 4194304 00:41:52.861 }, 00:41:52.861 { 00:41:52.861 "uuid": "3246e21b-eab0-4df8-9d7d-2002c5c95dd1", 00:41:52.861 "name": "lvs_n_0", 00:41:52.861 "base_bdev": "8868a2ee-9cba-49d4-ba24-83414429cd4c", 00:41:52.861 "total_data_clusters": 5114, 00:41:52.861 "free_clusters": 5114, 00:41:52.861 "block_size": 512, 00:41:52.861 "cluster_size": 4194304 00:41:52.861 } 00:41:52.861 ]' 00:41:52.861 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3246e21b-eab0-4df8-9d7d-2002c5c95dd1") .free_clusters' 00:41:52.861 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:41:52.861 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3246e21b-eab0-4df8-9d7d-2002c5c95dd1") .cluster_size' 00:41:53.122 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:41:53.122 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:41:53.122 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:41:53.122 20456 00:41:53.122 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:41:53.122 08:55:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3246e21b-eab0-4df8-9d7d-2002c5c95dd1 lbd_nest_0 20456 00:41:53.692 08:55:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=16045cfb-d814-463f-a0de-3dd54ec8ab23 00:41:53.692 08:55:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:54.262 08:55:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:41:54.262 08:55:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 16045cfb-d814-463f-a0de-3dd54ec8ab23 00:41:54.831 08:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:55.401 08:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:41:55.401 08:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:41:55.401 08:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:41:55.401 08:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:41:55.401 08:55:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:55.661 EAL: No free 2048 kB hugepages reported on node 1 00:42:07.926 Initializing NVMe Controllers 00:42:07.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:07.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:07.926 Initialization complete. Launching workers. 
00:42:07.926 ======================================================== 00:42:07.927 Latency(us) 00:42:07.927 Device Information : IOPS MiB/s Average min max 00:42:07.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.10 0.02 22208.89 349.55 46890.35 00:42:07.927 ======================================================== 00:42:07.927 Total : 45.10 0.02 22208.89 349.55 46890.35 00:42:07.927 00:42:07.927 08:55:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:42:07.927 08:55:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:07.927 EAL: No free 2048 kB hugepages reported on node 1 00:42:17.918 Initializing NVMe Controllers 00:42:17.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:17.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:17.918 Initialization complete. Launching workers. 00:42:17.918 ======================================================== 00:42:17.918 Latency(us) 00:42:17.918 Device Information : IOPS MiB/s Average min max 00:42:17.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.70 9.34 13386.42 6092.91 47905.09 00:42:17.918 ======================================================== 00:42:17.918 Total : 74.70 9.34 13386.42 6092.91 47905.09 00:42:17.918 00:42:17.918 08:55:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:42:17.918 08:55:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:42:17.918 08:55:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:17.918 EAL: No free 2048 kB hugepages reported on node 1 00:42:27.908 Initializing NVMe Controllers 00:42:27.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:27.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:27.908 Initialization complete. Launching workers. 00:42:27.908 ======================================================== 00:42:27.909 Latency(us) 00:42:27.909 Device Information : IOPS MiB/s Average min max 00:42:27.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4118.20 2.01 7774.93 893.59 12727.54 00:42:27.909 ======================================================== 00:42:27.909 Total : 4118.20 2.01 7774.93 893.59 12727.54 00:42:27.909 00:42:27.909 08:55:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:42:27.909 08:55:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:27.909 EAL: No free 2048 kB hugepages reported on node 1 00:42:37.907 Initializing NVMe Controllers 00:42:37.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:37.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:37.908 Initialization complete. Launching workers. 
00:42:37.908 ======================================================== 00:42:37.908 Latency(us) 00:42:37.908 Device Information : IOPS MiB/s Average min max 00:42:37.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2320.70 290.09 13785.29 1139.13 29896.90 00:42:37.908 ======================================================== 00:42:37.908 Total : 2320.70 290.09 13785.29 1139.13 29896.90 00:42:37.908 00:42:37.908 08:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:42:37.908 08:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:42:37.908 08:55:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:37.908 EAL: No free 2048 kB hugepages reported on node 1 00:42:50.137 Initializing NVMe Controllers 00:42:50.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:50.137 Controller IO queue size 128, less than required. 00:42:50.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:50.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:50.137 Initialization complete. Launching workers. 00:42:50.137 ======================================================== 00:42:50.137 Latency(us) 00:42:50.137 Device Information : IOPS MiB/s Average min max 00:42:50.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6895.11 3.37 18574.25 3141.58 38951.75 00:42:50.137 ======================================================== 00:42:50.137 Total : 6895.11 3.37 18574.25 3141.58 38951.75 00:42:50.137 00:42:50.137 08:56:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:42:50.137 08:56:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:50.137 EAL: No free 2048 kB hugepages reported on node 1 00:43:00.124 Initializing NVMe Controllers 00:43:00.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:00.124 Controller IO queue size 128, less than required. 00:43:00.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:00.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:43:00.124 Initialization complete. Launching workers. 
00:43:00.124 ======================================================== 00:43:00.124 Latency(us) 00:43:00.124 Device Information : IOPS MiB/s Average min max 00:43:00.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1056.00 132.00 121416.95 15937.52 254869.02 00:43:00.124 ======================================================== 00:43:00.124 Total : 1056.00 132.00 121416.95 15937.52 254869.02 00:43:00.124 00:43:00.124 08:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:00.124 08:56:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16045cfb-d814-463f-a0de-3dd54ec8ab23 00:43:00.383 08:56:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:43:00.642 08:56:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8868a2ee-9cba-49d4-ba24-83414429cd4c 00:43:01.210 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:01.469 08:56:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:01.469 rmmod nvme_tcp 00:43:01.469 rmmod nvme_fabrics 00:43:01.469 rmmod nvme_keyring 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2476539 ']' 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2476539 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2476539 ']' 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2476539 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2476539 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 2476539' 00:43:01.729 killing process with pid 2476539 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2476539 00:43:01.729 08:56:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2476539 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:05.027 08:56:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:07.569 00:43:07.569 real 1m46.183s 00:43:07.569 user 6m33.685s 00:43:07.569 sys 0m18.963s 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:43:07.569 ************************************ 00:43:07.569 END TEST nvmf_perf 00:43:07.569 ************************************ 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:07.569 ************************************ 00:43:07.569 START TEST nvmf_fio_host 00:43:07.569 ************************************ 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:43:07.569 * Looking for test storage... 
00:43:07.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:07.569 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:43:07.570 08:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:10.896 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:10.896 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:10.896 
08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:10.896 Found net devices under 0000:84:00.0: cvl_0_0 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:10.896 Found net devices under 0000:84:00.1: cvl_0_1 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:10.896 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
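This is the physical-loopback topology the phy jobs use: the two ports of the E810 (ice) adapter at 0000:84:00.0/0000:84:00.1 show up as cvl_0_0 and cvl_0_1 and serve as the two ends of the link. The target port is moved into its own network namespace while the initiator keeps the other port in the root namespace, so NVMe/TCP traffic actually leaves the host network stack. The remaining setup commands in the trace reduce to this sketch (interface, namespace and address values as detected above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
  ping -c 1 10.0.0.2                                             # sanity-check before the tests run

The pings in both directions below confirm the path is up before the target application is started.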
00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:10.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:10.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:43:10.897 00:43:10.897 --- 10.0.0.2 ping statistics --- 00:43:10.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:10.897 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:10.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:10.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:43:10.897 00:43:10.897 --- 10.0.0.1 ping statistics --- 00:43:10.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:10.897 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:10.897 08:56:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2489512 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 2489512 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2489512 ']' 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:10.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:10.897 08:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:43:10.897 [2024-07-23 08:56:23.188104] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:43:10.897 [2024-07-23 08:56:23.188366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:10.897 EAL: No free 2048 kB hugepages reported on node 1 00:43:10.897 [2024-07-23 08:56:23.396882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:11.465 [2024-07-23 08:56:23.904637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:11.465 [2024-07-23 08:56:23.904757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:11.465 [2024-07-23 08:56:23.904819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:11.465 [2024-07-23 08:56:23.904864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:11.465 [2024-07-23 08:56:23.904910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
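With the network namespace in place, fio.sh starts the target inside it and blocks until the RPC socket answers before configuring anything. Stripped of the autotest wrappers, that step is roughly (paths shortened; waitforlisten is the autotest helper that polls until /var/tmp/spdk.sock accepts requests, as the message above shows):

  # 4 cores (-m 0xF), all tracepoint groups (-e 0xFFFF), shared-memory id 0 (-i 0)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"

The reactor notices just below confirm all four cores came up. After that, a TCP transport, a 64 MiB Malloc1 bdev and subsystem cnode1 are created over RPC, and the I/O itself is driven through the SPDK fio plugin rather than the kernel initiator; the fio_nvme helper boils down to the following (the ASan library is additionally preloaded on this sanitizer build, as the LD_PRELOAD line in the trace shows):

  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
      ./app/fio/nvme/example_config.fio \
      --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096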
00:43:11.465 [2024-07-23 08:56:23.905153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:11.465 [2024-07-23 08:56:23.905221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:11.465 [2024-07-23 08:56:23.905274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:11.465 [2024-07-23 08:56:23.905287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:43:12.035 08:56:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:12.035 08:56:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:43:12.035 08:56:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:12.606 [2024-07-23 08:56:24.995674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:12.606 08:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:43:12.606 08:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:12.606 08:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:43:12.606 08:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:43:13.173 Malloc1 00:43:13.173 08:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:13.739 08:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:13.997 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:14.255 [2024-07-23 08:56:26.552817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:14.255 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:14.515 08:56:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:43:14.515 08:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:14.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:43:14.775 fio-3.35 00:43:14.775 Starting 1 thread 00:43:15.034 EAL: No free 2048 kB hugepages reported on node 1 00:43:17.577 00:43:17.577 test: (groupid=0, jobs=1): err= 0: pid=2490135: Tue Jul 23 08:56:29 2024 00:43:17.577 read: IOPS=4606, BW=18.0MiB/s (18.9MB/s)(36.1MiB/2004msec) 00:43:17.577 slat (usec): min=4, max=337, avg= 9.43, stdev= 4.76 00:43:17.577 clat (usec): min=4141, max=18469, avg=14475.07, stdev=1263.48 00:43:17.577 lat (usec): min=4151, max=18479, avg=14484.50, stdev=1263.56 00:43:17.577 clat percentiles (usec): 00:43:17.577 | 1.00th=[ 7963], 5.00th=[13173], 10.00th=[13829], 20.00th=[14091], 00:43:17.577 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:43:17.577 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15401], 95.00th=[15795], 00:43:17.577 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:43:17.577 | 99.99th=[18482] 00:43:17.577 bw ( KiB/s): min=17157, max=19216, per=99.43%, avg=18323.25, stdev=856.69, samples=4 00:43:17.577 iops : min= 4289, max= 4804, avg=4580.75, stdev=214.29, samples=4 00:43:17.577 write: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(36.1MiB/2004msec); 0 zone resets 00:43:17.577 slat (usec): min=4, max=238, avg= 9.98, stdev= 3.61 00:43:17.577 clat (usec): min=2761, max=16777, avg=13054.00, stdev=1276.46 00:43:17.577 lat (usec): min=2793, max=16787, avg=13063.98, stdev=1276.97 00:43:17.577 clat percentiles (usec): 00:43:17.577 | 1.00th=[ 6915], 5.00th=[10945], 10.00th=[11994], 20.00th=[12649], 00:43:17.577 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:43:17.577 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14222], 00:43:17.577 | 99.00th=[14484], 99.50th=[15008], 
99.90th=[15401], 99.95th=[15664], 00:43:17.577 | 99.99th=[16712] 00:43:17.577 bw ( KiB/s): min=17936, max=19224, per=99.48%, avg=18336.75, stdev=596.63, samples=4 00:43:17.577 iops : min= 4484, max= 4806, avg=4584.00, stdev=149.25, samples=4 00:43:17.577 lat (msec) : 4=0.06%, 10=2.77%, 20=97.17% 00:43:17.577 cpu : usr=77.48%, sys=21.07%, ctx=8, majf=0, minf=1535 00:43:17.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:43:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:17.577 issued rwts: total=9232,9234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:17.577 00:43:17.577 Run status group 0 (all jobs): 00:43:17.577 READ: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=36.1MiB (37.8MB), run=2004-2004msec 00:43:17.577 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=36.1MiB (37.8MB), run=2004-2004msec 00:43:17.838 ----------------------------------------------------- 00:43:17.838 Suppressions used: 00:43:17.838 count bytes template 00:43:17.838 1 57 /usr/src/fio/parse.c 00:43:17.838 1 8 libtcmalloc_minimal.so 00:43:17.838 ----------------------------------------------------- 00:43:17.838 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:17.838 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:17.839 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:17.839 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- 
# break 00:43:17.839 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:43:17.839 08:56:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:43:18.408 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:43:18.408 fio-3.35 00:43:18.408 Starting 1 thread 00:43:18.408 EAL: No free 2048 kB hugepages reported on node 1 00:43:20.948 00:43:20.948 test: (groupid=0, jobs=1): err= 0: pid=2490516: Tue Jul 23 08:56:33 2024 00:43:20.948 read: IOPS=2483, BW=38.8MiB/s (40.7MB/s)(78.2MiB/2016msec) 00:43:20.948 slat (usec): min=5, max=103, avg=14.35, stdev= 5.17 00:43:20.948 clat (usec): min=9473, max=46140, avg=30749.85, stdev=7475.91 00:43:20.948 lat (usec): min=9481, max=46156, avg=30764.20, stdev=7477.14 00:43:20.948 clat percentiles (usec): 00:43:20.948 | 1.00th=[11863], 5.00th=[15139], 10.00th=[20055], 20.00th=[24249], 00:43:20.948 | 30.00th=[28181], 40.00th=[30540], 50.00th=[32113], 60.00th=[33817], 00:43:20.948 | 70.00th=[34866], 80.00th=[36963], 90.00th=[39060], 95.00th=[41681], 00:43:20.948 | 99.00th=[44303], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:43:20.948 | 99.99th=[46400] 00:43:20.948 bw ( KiB/s): min=19488, max=23520, per=54.33%, avg=21584.00, stdev=2055.74, samples=4 00:43:20.948 iops : min= 1218, max= 1470, avg=1349.00, stdev=128.48, samples=4 00:43:20.948 write: IOPS=1415, BW=22.1MiB/s (23.2MB/s)(44.6MiB/2016msec); 0 zone resets 00:43:20.948 slat (usec): min=43, max=281, avg=95.81, stdev=21.84 00:43:20.948 clat (usec): min=14130, max=55905, avg=35826.89, stdev=6305.58 00:43:20.948 lat (usec): min=14229, max=56033, avg=35922.70, stdev=6316.12 00:43:20.948 clat percentiles (usec): 00:43:20.948 | 1.00th=[17171], 5.00th=[20055], 10.00th=[27395], 20.00th=[32375], 00:43:20.948 | 30.00th=[34866], 40.00th=[35914], 50.00th=[36963], 60.00th=[38011], 00:43:20.948 | 70.00th=[39060], 80.00th=[40633], 90.00th=[42206], 95.00th=[44303], 00:43:20.948 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50594], 99.95th=[52691], 00:43:20.948 | 99.99th=[55837] 00:43:20.948 bw ( KiB/s): min=19680, max=24576, per=99.28%, avg=22480.00, stdev=2401.28, samples=4 00:43:20.948 iops : min= 1230, max= 1536, avg=1405.00, stdev=150.08, samples=4 00:43:20.948 lat (msec) : 10=0.13%, 20=7.79%, 50=92.05%, 100=0.04% 00:43:20.948 cpu : usr=61.34%, sys=23.18%, ctx=223, majf=0, minf=1690 00:43:20.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:20.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:20.948 issued rwts: total=5006,2853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:20.948 00:43:20.948 Run status group 0 (all jobs): 00:43:20.948 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=78.2MiB (82.0MB), run=2016-2016msec 00:43:20.948 WRITE: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.6MiB (46.7MB), run=2016-2016msec 00:43:21.209 ----------------------------------------------------- 00:43:21.209 Suppressions used: 00:43:21.209 count bytes template 
00:43:21.209 1 57 /usr/src/fio/parse.c 00:43:21.209 581 55776 /usr/src/fio/iolog.c 00:43:21.209 1 8 libtcmalloc_minimal.so 00:43:21.209 ----------------------------------------------------- 00:43:21.209 00:43:21.209 08:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:43:21.777 08:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 -i 10.0.0.2 00:43:25.974 Nvme0n1 00:43:25.974 08:56:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:43:28.523 08:56:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f83552a7-8d87-4630-bd38-e9609015f856 00:43:28.523 08:56:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f83552a7-8d87-4630-bd38-e9609015f856 00:43:28.523 08:56:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f83552a7-8d87-4630-bd38-e9609015f856 00:43:28.523 08:56:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:43:28.523 08:56:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:43:28.523 08:56:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:43:28.523 08:56:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:43:29.103 08:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:43:29.103 { 00:43:29.103 "uuid": "f83552a7-8d87-4630-bd38-e9609015f856", 00:43:29.103 "name": "lvs_0", 00:43:29.103 "base_bdev": "Nvme0n1", 00:43:29.103 "total_data_clusters": 930, 00:43:29.103 "free_clusters": 930, 00:43:29.103 "block_size": 512, 00:43:29.103 "cluster_size": 1073741824 00:43:29.103 } 00:43:29.103 ]' 00:43:29.103 08:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f83552a7-8d87-4630-bd38-e9609015f856") .free_clusters' 00:43:29.103 08:56:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:43:29.103 08:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f83552a7-8d87-4630-bd38-e9609015f856") .cluster_size' 00:43:29.103 08:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:43:29.103 08:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:43:29.103 08:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:43:29.103 952320 00:43:29.103 08:56:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:43:29.672 6dc14c32-6eb6-42c3-a7ab-f8bed85367c2 00:43:29.672 08:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:43:30.239 08:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:43:30.497 08:56:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:43:30.757 08:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:31.018 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:43:31.018 fio-3.35 00:43:31.018 Starting 1 thread 00:43:31.018 EAL: No free 2048 kB hugepages reported on node 1 00:43:33.558 00:43:33.558 test: (groupid=0, jobs=1): err= 0: pid=2491981: Tue Jul 23 08:56:45 2024 00:43:33.558 read: IOPS=3393, BW=13.3MiB/s (13.9MB/s)(26.7MiB/2014msec) 00:43:33.558 slat (usec): min=3, max=337, avg= 8.23, stdev= 6.23 00:43:33.558 clat (usec): min=1921, max=177990, avg=20183.96, stdev=14908.14 00:43:33.558 lat (usec): min=1932, max=178101, avg=20192.19, stdev=14909.25 00:43:33.558 clat percentiles (msec): 00:43:33.558 | 1.00th=[ 15], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:43:33.558 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 19], 60.00th=[ 20], 00:43:33.558 | 70.00th=[ 20], 80.00th=[ 21], 90.00th=[ 21], 95.00th=[ 22], 00:43:33.558 | 99.00th=[ 150], 99.50th=[ 171], 99.90th=[ 178], 99.95th=[ 178], 00:43:33.558 | 99.99th=[ 178] 00:43:33.558 bw ( KiB/s): min= 9760, max=15168, per=99.77%, avg=13542.00, stdev=2539.58, samples=4 00:43:33.558 iops : min= 2440, max= 3792, avg=3385.50, stdev=634.90, samples=4 00:43:33.558 write: IOPS=3417, BW=13.3MiB/s (14.0MB/s)(26.9MiB/2014msec); 0 zone resets 00:43:33.558 slat (usec): min=4, max=295, avg= 8.83, stdev= 5.11 00:43:33.558 clat (usec): min=594, max=171810, avg=16963.60, stdev=13771.95 00:43:33.558 lat (usec): min=605, max=171829, avg=16972.43, stdev=13773.12 00:43:33.558 clat percentiles (msec): 00:43:33.558 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:43:33.558 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:43:33.558 | 70.00th=[ 17], 80.00th=[ 17], 90.00th=[ 18], 95.00th=[ 18], 00:43:33.558 | 99.00th=[ 25], 99.50th=[ 165], 99.90th=[ 171], 99.95th=[ 171], 00:43:33.558 | 99.99th=[ 171] 00:43:33.558 bw ( KiB/s): min=10072, max=15104, per=99.76%, avg=13636.00, stdev=2384.24, samples=4 00:43:33.558 iops : min= 2518, max= 3776, avg=3409.00, stdev=596.06, samples=4 00:43:33.558 lat (usec) : 750=0.01%, 1000=0.01% 00:43:33.558 lat (msec) : 2=0.04%, 4=0.04%, 10=0.37%, 20=88.05%, 50=10.54% 00:43:33.558 lat (msec) : 250=0.93% 00:43:33.558 cpu : usr=74.37%, sys=23.70%, ctx=33, majf=0, minf=1534 00:43:33.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:33.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:33.558 issued rwts: total=6834,6882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:33.558 00:43:33.558 Run status group 0 (all jobs): 00:43:33.558 READ: bw=13.3MiB/s (13.9MB/s), 13.3MiB/s-13.3MiB/s (13.9MB/s-13.9MB/s), io=26.7MiB (28.0MB), run=2014-2014msec 00:43:33.558 WRITE: bw=13.3MiB/s (14.0MB/s), 13.3MiB/s-13.3MiB/s (14.0MB/s-14.0MB/s), io=26.9MiB (28.2MB), 
run=2014-2014msec 00:43:33.818 ----------------------------------------------------- 00:43:33.818 Suppressions used: 00:43:33.818 count bytes template 00:43:33.818 1 58 /usr/src/fio/parse.c 00:43:33.818 1 8 libtcmalloc_minimal.so 00:43:33.818 ----------------------------------------------------- 00:43:33.818 00:43:33.818 08:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:34.389 08:56:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:43:36.301 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fb93c747-5890-4ca3-babb-88472e1baa63 00:43:36.301 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fb93c747-5890-4ca3-babb-88472e1baa63 00:43:36.301 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=fb93c747-5890-4ca3-babb-88472e1baa63 00:43:36.301 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:43:36.301 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:43:36.301 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:43:36.301 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:43:36.561 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:43:36.561 { 00:43:36.561 "uuid": "f83552a7-8d87-4630-bd38-e9609015f856", 00:43:36.561 "name": "lvs_0", 00:43:36.561 "base_bdev": "Nvme0n1", 00:43:36.561 "total_data_clusters": 930, 00:43:36.561 "free_clusters": 0, 00:43:36.561 "block_size": 512, 00:43:36.561 "cluster_size": 1073741824 00:43:36.561 }, 00:43:36.561 { 00:43:36.561 "uuid": "fb93c747-5890-4ca3-babb-88472e1baa63", 00:43:36.561 "name": "lvs_n_0", 00:43:36.561 "base_bdev": "6dc14c32-6eb6-42c3-a7ab-f8bed85367c2", 00:43:36.561 "total_data_clusters": 237847, 00:43:36.561 "free_clusters": 237847, 00:43:36.561 "block_size": 512, 00:43:36.561 "cluster_size": 4194304 00:43:36.561 } 00:43:36.561 ]' 00:43:36.561 08:56:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fb93c747-5890-4ca3-babb-88472e1baa63") .free_clusters' 00:43:36.561 08:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:43:36.561 08:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fb93c747-5890-4ca3-babb-88472e1baa63") .cluster_size' 00:43:36.821 08:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:43:36.821 08:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:43:36.821 08:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:43:36.821 951388 00:43:36.821 08:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:43:38.205 4fc3e049-b7cd-4147-820b-7ab1e7db1434 00:43:38.463 08:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:43:38.722 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:43:38.980 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:43:39.240 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:39.240 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:39.240 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:39.240 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:39.240 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:43:39.241 08:56:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:43:39.499 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:43:39.499 fio-3.35 00:43:39.499 Starting 1 thread 00:43:39.759 EAL: No free 2048 kB hugepages reported on node 1 00:43:42.300 00:43:42.300 test: (groupid=0, jobs=1): err= 0: pid=2492967: Tue Jul 23 08:56:54 2024 00:43:42.300 read: 
IOPS=3288, BW=12.8MiB/s (13.5MB/s)(25.9MiB/2014msec) 00:43:42.300 slat (usec): min=3, max=332, avg= 8.84, stdev= 6.01 00:43:42.300 clat (usec): min=8160, max=33955, avg=20925.73, stdev=1918.44 00:43:42.300 lat (usec): min=8189, max=33964, avg=20934.56, stdev=1918.06 00:43:42.300 clat percentiles (usec): 00:43:42.300 | 1.00th=[16712], 5.00th=[17957], 10.00th=[18744], 20.00th=[19530], 00:43:42.300 | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], 00:43:42.300 | 70.00th=[21890], 80.00th=[22414], 90.00th=[23200], 95.00th=[23725], 00:43:42.300 | 99.00th=[25297], 99.50th=[26084], 99.90th=[32113], 99.95th=[33817], 00:43:42.300 | 99.99th=[33817] 00:43:42.300 bw ( KiB/s): min=12208, max=13840, per=99.70%, avg=13114.00, stdev=678.49, samples=4 00:43:42.300 iops : min= 3052, max= 3460, avg=3278.50, stdev=169.62, samples=4 00:43:42.300 write: IOPS=3308, BW=12.9MiB/s (13.6MB/s)(26.0MiB/2014msec); 0 zone resets 00:43:42.300 slat (usec): min=3, max=310, avg= 9.21, stdev= 4.70 00:43:42.300 clat (usec): min=4039, max=29992, avg=17491.26, stdev=1646.92 00:43:42.300 lat (usec): min=4058, max=30001, avg=17500.47, stdev=1646.79 00:43:42.300 clat percentiles (usec): 00:43:42.300 | 1.00th=[13829], 5.00th=[15270], 10.00th=[15664], 20.00th=[16319], 00:43:42.300 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:43:42.300 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19268], 95.00th=[19792], 00:43:42.300 | 99.00th=[21103], 99.50th=[22152], 99.90th=[27919], 99.95th=[29230], 00:43:42.300 | 99.99th=[30016] 00:43:42.300 bw ( KiB/s): min=12888, max=13568, per=99.78%, avg=13204.00, stdev=279.01, samples=4 00:43:42.300 iops : min= 3222, max= 3392, avg=3301.00, stdev=69.75, samples=4 00:43:42.300 lat (msec) : 10=0.17%, 20=62.75%, 50=37.08% 00:43:42.300 cpu : usr=72.73%, sys=25.14%, ctx=35, majf=0, minf=1533 00:43:42.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:42.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:42.301 issued rwts: total=6623,6663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:42.301 00:43:42.301 Run status group 0 (all jobs): 00:43:42.301 READ: bw=12.8MiB/s (13.5MB/s), 12.8MiB/s-12.8MiB/s (13.5MB/s-13.5MB/s), io=25.9MiB (27.1MB), run=2014-2014msec 00:43:42.301 WRITE: bw=12.9MiB/s (13.6MB/s), 12.9MiB/s-12.9MiB/s (13.6MB/s-13.6MB/s), io=26.0MiB (27.3MB), run=2014-2014msec 00:43:42.561 ----------------------------------------------------- 00:43:42.561 Suppressions used: 00:43:42.561 count bytes template 00:43:42.561 1 58 /usr/src/fio/parse.c 00:43:42.561 1 8 libtcmalloc_minimal.so 00:43:42.561 ----------------------------------------------------- 00:43:42.561 00:43:42.561 08:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:43:43.502 08:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:43:43.502 08:56:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:43:48.786 08:57:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:43:48.786 08:57:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:43:52.136 08:57:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:43:52.136 08:57:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:43:54.675 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:54.675 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:43:54.675 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:43:54.675 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:54.675 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:43:54.675 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:54.676 rmmod nvme_tcp 00:43:54.676 rmmod nvme_fabrics 00:43:54.676 rmmod nvme_keyring 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2489512 ']' 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2489512 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2489512 ']' 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2489512 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2489512 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2489512' 00:43:54.676 killing process with pid 2489512 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2489512 00:43:54.676 08:57:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2489512 00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
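At this point host/fio.sh has finished its three fio runs and is unwinding everything it created before the failover suite starts. Boiled down to plain commands, the cleanup recorded in the surrounding log lines looks roughly like the sketch below (rpc.py stands for the full scripts/rpc.py path used throughout this log, and killprocess is the test helper that signals the target PID, 2489512 in this run):

# drop the last NVMe-oF subsystem, then unwind the lvol stack
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
sync
rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0     # nested lvol
rpc.py bdev_lvol_delete_lvstore -l lvs_n_0     # nested lvstore
rpc.py bdev_lvol_delete lvs_0/lbd_0            # outer lvol
rpc.py bdev_lvol_delete_lvstore -l lvs_0       # outer lvstore
rpc.py bdev_nvme_detach_controller Nvme0       # release the local NVMe drive
# nvmftestfini: unload the kernel NVMe/TCP initiator modules and stop the target
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
killprocess 2489512                            # nvmf_tgt started for this test
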
00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:57.217 08:57:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:59.127 00:43:59.127 real 0m51.692s 00:43:59.127 user 3m16.646s 00:43:59.127 sys 0m10.185s 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:43:59.127 ************************************ 00:43:59.127 END TEST nvmf_fio_host 00:43:59.127 ************************************ 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:59.127 ************************************ 00:43:59.127 START TEST nvmf_failover 00:43:59.127 ************************************ 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:43:59.127 * Looking for test storage... 
00:43:59.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:59.127 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
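nvmftestinit, which starts here, is what turns the two ports of the physical NIC into the initiator/target pair that the rest of the failover test talks over. The log lines that follow show it step by step; condensed (the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.x addresses are the ones this run detects and assigns), the topology setup is approximately:

# put the first port (target side) into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# keep the second port (initiator side) in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# open the NVMe/TCP port and sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), so traffic between initiator and target goes through the NIC ports rather than over loopback.
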
00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:43:59.128 08:57:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:02.423 08:57:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:44:02.423 Found 0000:84:00.0 (0x8086 - 0x159b) 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:02.423 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:44:02.424 Found 0000:84:00.1 (0x8086 - 0x159b) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:44:02.424 Found net devices under 0000:84:00.0: cvl_0_0 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:44:02.424 Found net devices under 0000:84:00.1: cvl_0_1 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:02.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:02.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:44:02.424 00:44:02.424 --- 10.0.0.2 ping statistics --- 00:44:02.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.424 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:02.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:02.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:44:02.424 00:44:02.424 --- 10.0.0.1 ping statistics --- 00:44:02.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.424 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2497410 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:44:02.424 08:57:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2497410 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2497410 ']' 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:02.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:02.424 08:57:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:44:02.684 [2024-07-23 08:57:14.994727] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:02.684 [2024-07-23 08:57:14.995061] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:02.684 EAL: No free 2048 kB hugepages reported on node 1 00:44:02.943 [2024-07-23 08:57:15.280809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:03.202 [2024-07-23 08:57:15.601698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:03.202 [2024-07-23 08:57:15.601784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:03.202 [2024-07-23 08:57:15.601825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:03.203 [2024-07-23 08:57:15.601851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:03.203 [2024-07-23 08:57:15.601877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
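The target for the failover suite is coming up here (PID 2497410, started with -m 0xE so its reactors land on cores 1-3). Once it is ready, the test configures a single malloc-backed subsystem with three TCP listeners and drives it from bdevperf. A condensed sketch of the command sequence the remainder of this log section records (rpc.py, bdevperf and bdevperf.py stand for the full SPDK paths, and the loop compresses the three individual add_listener calls):

# target side: subsystem nqn.2016-06.io.spdk:cnode1 backed by a 64 MiB malloc bdev
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# initiator side: bdevperf waits for RPCs (-z), then gets two paths to the same controller
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# failover trigger: remove the listener the active path is using so I/O moves to the other path
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
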
00:44:03.203 [2024-07-23 08:57:15.602095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:44:03.203 [2024-07-23 08:57:15.602167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:03.203 [2024-07-23 08:57:15.602185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:44:04.141 08:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:04.141 08:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:44:04.141 08:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:04.141 08:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:04.141 08:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:44:04.141 08:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:04.141 08:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:04.711 [2024-07-23 08:57:16.981402] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:04.711 08:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:44:05.282 Malloc0 00:44:05.282 08:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:05.852 08:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:06.422 08:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:07.047 [2024-07-23 08:57:19.539829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:07.047 08:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:07.614 [2024-07-23 08:57:20.037716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:44:07.614 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:44:08.184 [2024-07-23 08:57:20.680111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2498041 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2498041 /var/tmp/bdevperf.sock 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2498041 ']' 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:08.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:08.444 08:57:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:44:09.827 08:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:09.827 08:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:44:09.827 08:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:10.763 NVMe0n1 00:44:10.763 08:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:11.020 00:44:11.020 08:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2498446 00:44:11.020 08:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:11.020 08:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:44:11.958 08:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:12.528 [2024-07-23 08:57:25.030456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:44:12.528 [2024-07-23 08:57:25.030577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:44:12.528 [2024-07-23 08:57:25.030608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:44:12.528 [2024-07-23 08:57:25.030643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:44:12.528 [2024-07-23 08:57:25.030667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:44:12.528 [2024-07-23 08:57:25.030691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:44:12.528 [2024-07-23 08:57:25.030721] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set [same message repeated about twenty more times between 08:57:25.030746 and 08:57:25.031214] 00:44:12.788 08:57:25 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:44:16.081 08:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:16.081 00:44:16.081 08:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:16.340 [2024-07-23 08:57:28.773909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:44:16.340 08:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:44:19.635 08:57:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:19.635 [2024-07-23 08:57:32.087196] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:19.635 08:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:44:21.017 08:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:44:21.017 [2024-07-23 08:57:33.401892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:44:21.017 [2024-07-23 08:57:33.401958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:44:21.017 [2024-07-23 08:57:33.401988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:44:21.017 08:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2498446 00:44:26.297 0 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2498041 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2498041 ']' 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2498041 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2498041 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2498041' 00:44:26.297 killing process with pid 2498041 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2498041 00:44:26.297 08:57:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2498041 00:44:27.723 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:44:27.723 [2024-07-23 08:57:20.866353] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:27.723 [2024-07-23 08:57:20.866690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498041 ] 00:44:27.723 EAL: No free 2048 kB hugepages reported on node 1 00:44:27.723 [2024-07-23 08:57:21.101012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.723 [2024-07-23 08:57:21.412900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:27.723 Running I/O for 15 seconds... 00:44:27.723 [2024-07-23 08:57:25.034657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.034738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.034812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.034851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.034888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.034919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.034953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.034984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.035017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.035049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.035083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.035112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.035145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.035175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.035208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.723 [2024-07-23 08:57:25.035237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.723 [2024-07-23 08:57:25.035269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.035956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.035989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 
[2024-07-23 08:57:25.036573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.724 [2024-07-23 08:57:25.036601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.036663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.036724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.036784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.036843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.036904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.036964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.036995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.037024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.037061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.037091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.037122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.037151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.037182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.037212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.037243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.037272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.037303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.037343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.037375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.724 [2024-07-23 08:57:25.037405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.724 [2024-07-23 08:57:25.037437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.037955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.037986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56416 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.038941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.038972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 
08:57:25.039061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.725 [2024-07-23 08:57:25.039601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.725 [2024-07-23 08:57:25.039673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.039712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56568 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.039742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.039781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 
08:57:25.039809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.039848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56576 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.039877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.039906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.039929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.039953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56584 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.039980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56592 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56600 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56608 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56616 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040457] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56624 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56632 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56640 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56648 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56656 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.040926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.040951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.040974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.040998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56664 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.041096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56672 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.041212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56680 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.041324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56688 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.041434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56696 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.041533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56704 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 08:57:25.041631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56712 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.726 [2024-07-23 
08:57:25.041730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56720 len:8 PRP1 0x0 PRP2 0x0 00:44:27.726 [2024-07-23 08:57:25.041756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.726 [2024-07-23 08:57:25.041781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.726 [2024-07-23 08:57:25.041804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.041827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56728 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.041853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.041880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.041902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.041931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56736 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.041960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.041987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56744 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56752 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56760 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56768 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56776 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56784 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56792 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56800 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.042902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56808 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.042930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.042959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.042982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:56816 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56824 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56832 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56840 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56848 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56856 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56864 len:8 PRP1 0x0 PRP2 0x0 
00:44:27.727 [2024-07-23 08:57:25.043655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56872 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56880 len:8 PRP1 0x0 PRP2 0x0 00:44:27.727 [2024-07-23 08:57:25.043855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.727 [2024-07-23 08:57:25.043881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.727 [2024-07-23 08:57:25.043904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.727 [2024-07-23 08:57:25.043928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56888 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.043954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.043981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56896 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56904 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56912 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56128 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56136 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56144 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56152 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56160 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:25.044800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.728 [2024-07-23 08:57:25.044823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.728 [2024-07-23 08:57:25.044846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56168 len:8 PRP1 0x0 PRP2 0x0 00:44:27.728 [2024-07-23 08:57:25.044871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:27.728 [2024-07-23 08:57:25.044897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:44:27.728 [2024-07-23 08:57:25.044919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:44:27.728 [2024-07-23 08:57:25.044942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56176 len:8 PRP1 0x0 PRP2 0x0
00:44:27.728 [2024-07-23 08:57:25.044968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:27.728 [2024-07-23 08:57:25.045374] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f7f00 was disconnected and freed. reset controller.
00:44:27.728 [2024-07-23 08:57:25.045416] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:44:27.728 [2024-07-23 08:57:25.045490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:44:27.728 [2024-07-23 08:57:25.045527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:27.728 [2024-07-23 08:57:25.045560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:44:27.728 [2024-07-23 08:57:25.045597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:27.728 [2024-07-23 08:57:25.045627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:44:27.728 [2024-07-23 08:57:25.045654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:27.728 [2024-07-23 08:57:25.045690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:44:27.728 [2024-07-23 08:57:25.045718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:27.728 [2024-07-23 08:57:25.045743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:44:27.728 [2024-07-23 08:57:25.045843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor
00:44:27.728 [2024-07-23 08:57:25.051047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:44:27.728 [2024-07-23 08:57:25.118235] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
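The entries above record a bdev_nvme path failover: once the target deletes the submission queue, every queued I/O is completed manually with ABORTED - SQ DELETION status, the controller for nqn.2016-06.io.spdk:cnode1 is marked failed, and the driver fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the reset completes. A minimal sketch of the RPC calls that produce a two-listener subsystem with both addresses registered on the initiator side follows; the Malloc backing device, bdev names, and sizes are illustrative assumptions, not values taken from this run, and some SPDK versions additionally require an explicit multipath/failover option on the second attach.
+ # Target side: export one namespace through two TCP listeners (hypothetical setup)
+ scripts/rpc.py nvmf_create_transport -t tcp
+ scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
+ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
+ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
+ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
+ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
+ # Initiator side: attaching the same NQN under the same controller name a second time
+ # registers 10.0.0.2:4421 as the alternate trid used by the failover logged above
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1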
00:44:27.728 [2024-07-23 08:57:28.774842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.728 [2024-07-23 08:57:28.774924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:28.774981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.728 [2024-07-23 08:57:28.775014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:28.775050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.728 [2024-07-23 08:57:28.775081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.728 [2024-07-23 08:57:28.775114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.728 [2024-07-23 08:57:28.775148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.729 [2024-07-23 08:57:28.775213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.729 [2024-07-23 08:57:28.775275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.729 [2024-07-23 08:57:28.775349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.729 [2024-07-23 08:57:28.775410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.729 [2024-07-23 08:57:28.775482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.729 [2024-07-23 08:57:28.775544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775576] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.729 [2024-07-23 08:57:28.775604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.775667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.775727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.775814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.775875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.775939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.775970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.775999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776210] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71920 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.776955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.776983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.777013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.777047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.777078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.777107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.777138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.777167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.777198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.777226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.777257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.777286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.729 [2024-07-23 08:57:28.777326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.729 [2024-07-23 08:57:28.777358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 
[2024-07-23 08:57:28.777479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.777944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.777973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.778962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.778993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.730 [2024-07-23 08:57:28.779499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.730 [2024-07-23 08:57:28.779529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.779569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.779599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.779631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.779660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.779692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.779721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.779771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.779801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.779832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.779861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.779893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.779921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.779952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.779980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780012] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.731 [2024-07-23 08:57:28.780611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.780717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72408 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.780746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.780810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.780834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72416 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.780861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.780911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.780934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72424 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.780959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.780985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.781030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72432 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.781062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.781089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.781134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72440 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.781160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.781186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.781231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72448 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.781257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.781283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 
[2024-07-23 08:57:28.781339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72456 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.781365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.781391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.781437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72464 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.781472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.781499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.781545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72472 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.781571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.781597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.781642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72480 len:8 PRP1 0x0 PRP2 0x0 00:44:27.731 [2024-07-23 08:57:28.781667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.731 [2024-07-23 08:57:28.781693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.731 [2024-07-23 08:57:28.781715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.731 [2024-07-23 08:57:28.781738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72488 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.781763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.781789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.781812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.781840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72496 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.781867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.781893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.781915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.781939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72504 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.781965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.781991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72512 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72520 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72528 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72536 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72544 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:72552 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72560 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72568 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72576 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.782904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.782926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.782949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72584 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.782975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72592 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.783079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72600 len:8 PRP1 0x0 PRP2 0x0 
00:44:27.732 [2024-07-23 08:57:28.783179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72608 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.783296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72616 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.783407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.783512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72632 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.783617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72640 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.783715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.732 [2024-07-23 08:57:28.783763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.732 [2024-07-23 08:57:28.783787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72648 len:8 PRP1 0x0 PRP2 0x0 00:44:27.732 [2024-07-23 08:57:28.783812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.732 [2024-07-23 08:57:28.783838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.733 [2024-07-23 08:57:28.783861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.733 [2024-07-23 08:57:28.783885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72656 len:8 PRP1 0x0 PRP2 0x0 00:44:27.733 [2024-07-23 08:57:28.783912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.783939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.733 [2024-07-23 08:57:28.783962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.733 [2024-07-23 08:57:28.783986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72664 len:8 PRP1 0x0 PRP2 0x0 00:44:27.733 [2024-07-23 08:57:28.784012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.784039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.733 [2024-07-23 08:57:28.784062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.733 [2024-07-23 08:57:28.784086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72672 len:8 PRP1 0x0 PRP2 0x0 00:44:27.733 [2024-07-23 08:57:28.784112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.784137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.733 [2024-07-23 08:57:28.784159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.733 [2024-07-23 08:57:28.784183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72680 len:8 PRP1 0x0 PRP2 0x0 00:44:27.733 [2024-07-23 08:57:28.784215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.784243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.733 [2024-07-23 08:57:28.784265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.733 [2024-07-23 08:57:28.784290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72688 len:8 PRP1 0x0 PRP2 0x0 00:44:27.733 [2024-07-23 08:57:28.784324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.784353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.733 [2024-07-23 08:57:28.784376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.733 [2024-07-23 08:57:28.784399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72696 len:8 PRP1 0x0 PRP2 0x0 00:44:27.733 [2024-07-23 08:57:28.784433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.784810] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8180 was disconnected and freed. reset controller. 00:44:27.733 [2024-07-23 08:57:28.784850] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:44:27.733 [2024-07-23 08:57:28.784923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:28.784959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.784991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:28.785018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.785047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:28.785075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.785103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:28.785130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:28.785156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:27.733 [2024-07-23 08:57:28.785259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:27.733 [2024-07-23 08:57:28.790391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:27.733 [2024-07-23 08:57:28.897885] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
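The block above captures the first path switch of the run: bdev_nvme frees the aborted qpair, fails the trid over from 10.0.0.2:4421 to 10.0.0.2:4422, reconnects, and reports the controller reset as successful. A minimal sketch of how the surviving path could be confirmed afterwards, assuming bdevperf is still serving RPC on the same /var/tmp/bdevperf.sock socket used later in this trace; the check itself is illustrative and not part of the captured test:

    # List the NVMe bdev controllers known to bdevperf; the reported trid
    # (traddr/trsvcid) is expected to show 10.0.0.2:4422 after this failover.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers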
00:44:27.733 [2024-07-23 08:57:33.400678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:33.400787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.400825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:33.400857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.400899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:33.400929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.400958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:27.733 [2024-07-23 08:57:33.400985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.401013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:27.733 [2024-07-23 08:57:33.402525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.402573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.402627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.402658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.402691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.402721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.402753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.402782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.402812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.402840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.402872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.402900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.402931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.402959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.402992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.403022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.403056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.403084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.403115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.733 [2024-07-23 08:57:33.403143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.733 [2024-07-23 08:57:33.403175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.403932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.403962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:44:27.734 [2024-07-23 08:57:33.404823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.404942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.404970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.405002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.405031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.405062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.405090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.405121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.405150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.734 [2024-07-23 08:57:33.405181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.734 [2024-07-23 08:57:33.405209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.735 [2024-07-23 08:57:33.405269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.405955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.405988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.735 [2024-07-23 08:57:33.406755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.406965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.406993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.407024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.407052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.407084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.407119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.407153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.407182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.407213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.407242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.407273] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.735 [2024-07-23 08:57:33.407303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.735 [2024-07-23 08:57:33.407345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.407956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.407985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126152 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.408966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.408998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.736 [2024-07-23 08:57:33.409568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.736 [2024-07-23 08:57:33.409598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.409630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.409660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.409692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.409721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.409753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.409782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.409815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 
[2024-07-23 08:57:33.409844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.409876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.409905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.409938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.409967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.409999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.410028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.410089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.410150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.410211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:27.737 [2024-07-23 08:57:33.410276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.737 [2024-07-23 08:57:33.410348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.737 [2024-07-23 08:57:33.410409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:27.737 [2024-07-23 08:57:33.410469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:27.737 [2024-07-23 08:57:33.410556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:27.737 [2024-07-23 08:57:33.410584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126760 len:8 PRP1 0x0 PRP2 0x0 00:44:27.737 [2024-07-23 08:57:33.410611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:27.737 [2024-07-23 08:57:33.410985] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f8900 was disconnected and freed. reset controller. 00:44:27.737 [2024-07-23 08:57:33.411024] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:44:27.737 [2024-07-23 08:57:33.411056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:27.737 [2024-07-23 08:57:33.411149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:27.737 [2024-07-23 08:57:33.416322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:27.737 [2024-07-23 08:57:33.639214] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:44:27.737 00:44:27.737 Latency(us) 00:44:27.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:27.737 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:27.737 Verification LBA range: start 0x0 length 0x4000 00:44:27.737 NVMe0n1 : 15.02 4590.66 17.93 472.81 0.00 25229.95 1159.02 30098.01 00:44:27.737 =================================================================================================================== 00:44:27.737 Total : 4590.66 17.93 472.81 0.00 25229.95 1159.02 30098.01 00:44:27.737 Received shutdown signal, test time was about 15.000000 seconds 00:44:27.737 00:44:27.737 Latency(us) 00:44:27.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:27.737 =================================================================================================================== 00:44:27.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2500166 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2500166 /var/tmp/bdevperf.sock 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2500166 ']' 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:27.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:27.737 08:57:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:44:29.117 08:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:29.117 08:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:44:29.117 08:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:44:29.375 [2024-07-23 08:57:41.696475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:44:29.375 08:57:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:44:29.633 [2024-07-23 08:57:41.997629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:44:29.633 08:57:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:30.202 NVMe0n1 00:44:30.203 08:57:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:31.142 00:44:31.142 08:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:31.402 00:44:31.402 08:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:44:31.402 08:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:44:31.971 08:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:32.231 08:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:44:35.526 08:57:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:44:35.526 08:57:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:44:35.786 08:57:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2501085 00:44:35.786 08:57:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:35.786 08:57:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2501085 00:44:37.167 0 00:44:37.167 08:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:44:37.167 [2024-07-23 08:57:40.202160] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:37.167 [2024-07-23 08:57:40.202427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500166 ] 00:44:37.167 EAL: No free 2048 kB hugepages reported on node 1 00:44:37.167 [2024-07-23 08:57:40.437605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.167 [2024-07-23 08:57:40.749118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:37.167 [2024-07-23 08:57:44.597565] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:44:37.167 [2024-07-23 08:57:44.597731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:37.167 [2024-07-23 08:57:44.597778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:37.167 [2024-07-23 08:57:44.597826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:37.167 [2024-07-23 08:57:44.597856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:37.168 [2024-07-23 08:57:44.597886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:37.168 [2024-07-23 08:57:44.597914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:37.168 [2024-07-23 08:57:44.597943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:37.168 [2024-07-23 08:57:44.597971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:37.168 [2024-07-23 08:57:44.597999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:37.168 [2024-07-23 08:57:44.598140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:37.168 [2024-07-23 08:57:44.598208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:37.168 [2024-07-23 08:57:44.771619] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:44:37.168 Running I/O for 1 seconds... 
00:44:37.168 00:44:37.168 Latency(us) 00:44:37.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:37.168 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:37.168 Verification LBA range: start 0x0 length 0x4000 00:44:37.168 NVMe0n1 : 1.03 4610.32 18.01 0.00 0.00 27608.84 5922.51 24660.95 00:44:37.168 =================================================================================================================== 00:44:37.168 Total : 4610.32 18.01 0.00 0.00 27608.84 5922.51 24660.95 00:44:37.168 08:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:44:37.168 08:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:44:37.737 08:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:37.994 08:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:44:37.994 08:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:44:38.252 08:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:38.511 08:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:44:41.805 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:44:41.805 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:44:41.805 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2500166 00:44:41.805 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2500166 ']' 00:44:41.805 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2500166 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2500166 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2500166' 00:44:42.064 killing process with pid 2500166 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2500166 00:44:42.064 08:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2500166 00:44:43.445 08:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:44:43.445 08:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:43.709 rmmod nvme_tcp 00:44:43.709 rmmod nvme_fabrics 00:44:43.709 rmmod nvme_keyring 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2497410 ']' 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2497410 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2497410 ']' 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2497410 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2497410 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2497410' 00:44:43.709 killing process with pid 2497410 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2497410 00:44:43.709 08:57:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2497410 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:45.622 08:57:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:48.165 08:58:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:48.165 00:44:48.165 real 0m48.817s 00:44:48.165 user 2m51.212s 00:44:48.165 sys 0m8.535s 00:44:48.165 08:58:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:48.165 08:58:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:44:48.165 ************************************ 00:44:48.165 END TEST nvmf_failover 00:44:48.166 ************************************ 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:44:48.166 ************************************ 00:44:48.166 START TEST nvmf_host_discovery 00:44:48.166 ************************************ 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:44:48.166 * Looking for test storage... 00:44:48.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:48.166 08:58:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:44:48.166 08:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:44:51.463 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:51.464 08:58:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:44:51.464 Found 0000:84:00.0 (0x8086 - 0x159b) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:44:51.464 Found 0000:84:00.1 (0x8086 - 0x159b) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:44:51.464 Found net devices under 0000:84:00.0: cvl_0_0 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:44:51.464 Found net devices under 0000:84:00.1: cvl_0_1 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:51.464 08:58:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:51.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:51.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:44:51.464 00:44:51.464 --- 10.0.0.2 ping statistics --- 00:44:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.464 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:51.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:51.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:44:51.464 00:44:51.464 --- 10.0.0.1 ping statistics --- 00:44:51.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.464 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:44:51.464 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2504210 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2504210 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2504210 ']' 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
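The nvmf/common.sh trace above is nvmf_tcp_init wiring up the physical-NIC topology for these host tests: the first ice port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction confirms the path before nvmf_tgt is started inside that namespace. Condensed into a sketch, with the interface and namespace names exactly as printed in the log:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator port

This is why every listener added by the discovery test binds to 10.0.0.2, and why the target application itself is launched as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2'.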
00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:51.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:51.465 08:58:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:51.465 [2024-07-23 08:58:03.828555] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:51.465 [2024-07-23 08:58:03.828732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:51.465 EAL: No free 2048 kB hugepages reported on node 1 00:44:51.724 [2024-07-23 08:58:03.999340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:51.984 [2024-07-23 08:58:04.314689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:51.984 [2024-07-23 08:58:04.314779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:51.984 [2024-07-23 08:58:04.314814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:51.984 [2024-07-23 08:58:04.314846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:51.984 [2024-07-23 08:58:04.314873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:51.984 [2024-07-23 08:58:04.314940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:52.924 [2024-07-23 08:58:05.312136] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:44:52.924 [2024-07-23 08:58:05.320432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:52.924 null0 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:52.924 null1 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2504368 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2504368 /tmp/host.sock 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2504368 ']' 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:44:52.924 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:52.924 08:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:53.184 [2024-07-23 08:58:05.542080] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:44:53.184 [2024-07-23 08:58:05.542398] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504368 ] 00:44:53.184 EAL: No free 2048 kB hugepages reported on node 1 00:44:53.444 [2024-07-23 08:58:05.750049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:53.703 [2024-07-23 08:58:06.063706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:54.644 08:58:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:54.644 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:44:54.905 
08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:54.905 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:55.166 [2024-07-23 08:58:07.510825] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:55.166 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:44:55.426 08:58:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:44:55.686 [2024-07-23 08:58:08.087546] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:44:55.686 [2024-07-23 08:58:08.087628] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:44:55.686 [2024-07-23 08:58:08.087694] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:44:55.686 [2024-07-23 08:58:08.173973] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:44:55.945 [2024-07-23 08:58:08.238792] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:44:55.945 [2024-07-23 08:58:08.238840] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 
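The xtrace above keeps exercising a small set of helpers from host/discovery.sh and common/autotest_common.sh. A minimal sketch of those helpers, reconstructed only from the trace (rpc_cmd is the suite's JSON-RPC wrapper and -s /tmp/host.sock selects the host application's RPC socket; the failure return of waitforcondition is an assumption, since only its success path appears here):

get_subsystem_names() {
    # Controller names seen by the host app, normalized to one sorted line
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Block devices created from discovered namespaces (e.g. "nvme0n1 nvme0n2")
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # Service IDs (ports) of every path behind one controller, numerically sorted
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

waitforcondition() {
    # Re-evaluate a condition once per second, up to 10 times
    local cond=$1
    local max=10
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1    # assumed; the trace only ever shows "return 0"
}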
00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:44:56.516 08:58:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:56.516 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:56.775 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:56.776 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:44:57.036 08:58:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:57.036 [2024-07-23 08:58:09.401711] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:44:57.036 [2024-07-23 08:58:09.402751] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:44:57.036 [2024-07-23 08:58:09.402831] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:57.036 [2024-07-23 08:58:09.489378] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:57.036 08:58:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:44:57.036 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:57.296 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:44:57.296 08:58:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:44:57.296 [2024-07-23 08:58:09.592473] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:44:57.296 [2024-07-23 08:58:09.592518] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:44:57.296 [2024-07-23 08:58:09.592541] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:44:58.233 08:58:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.233 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.495 [2024-07-23 08:58:10.762807] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:44:58.495 [2024-07-23 08:58:10.762894] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:58.495 [2024-07-23 08:58:10.771755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:58.495 [2024-07-23 08:58:10.771825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:58.495 [2024-07-23 08:58:10.771871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:58.495 [2024-07-23 08:58:10.771900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:58.495 [2024-07-23 08:58:10.771929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:58.495 [2024-07-23 08:58:10.771957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:58.495 [2024-07-23 08:58:10.771985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:58.495 [2024-07-23 08:58:10.772013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:58.495 [2024-07-23 08:58:10.772040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:58.495 [2024-07-23 08:58:10.781761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:58.495 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.495 [2024-07-23 08:58:10.791830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:58.495 [2024-07-23 08:58:10.792242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.495 [2024-07-23 08:58:10.792297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7280 with addr=10.0.0.2, port=4420 00:44:58.495 [2024-07-23 08:58:10.792344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:58.495 [2024-07-23 08:58:10.792402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:58.495 [2024-07-23 08:58:10.792450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:58.495 [2024-07-23 08:58:10.792481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:58.495 [2024-07-23 
08:58:10.792511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:58.495 [2024-07-23 08:58:10.792568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:58.495 [2024-07-23 08:58:10.801970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:58.495 [2024-07-23 08:58:10.802329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.495 [2024-07-23 08:58:10.802379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7280 with addr=10.0.0.2, port=4420 00:44:58.495 [2024-07-23 08:58:10.802411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:58.495 [2024-07-23 08:58:10.802454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:58.495 [2024-07-23 08:58:10.802494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:58.495 [2024-07-23 08:58:10.802522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:58.495 [2024-07-23 08:58:10.802547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:58.495 [2024-07-23 08:58:10.802586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:58.495 [2024-07-23 08:58:10.812100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:58.495 [2024-07-23 08:58:10.812414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.495 [2024-07-23 08:58:10.812465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7280 with addr=10.0.0.2, port=4420 00:44:58.495 [2024-07-23 08:58:10.812497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:58.495 [2024-07-23 08:58:10.812541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:58.495 [2024-07-23 08:58:10.812581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:58.495 [2024-07-23 08:58:10.812609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:58.495 [2024-07-23 08:58:10.812634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:58.495 [2024-07-23 08:58:10.812672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:44:58.495 [2024-07-23 08:58:10.822221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:58.495 [2024-07-23 08:58:10.822566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.495 [2024-07-23 08:58:10.822615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7280 with addr=10.0.0.2, port=4420 00:44:58.495 [2024-07-23 08:58:10.822646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:58.495 [2024-07-23 08:58:10.822689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:58.495 [2024-07-23 08:58:10.822729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:58.495 [2024-07-23 08:58:10.822765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:58.495 [2024-07-23 08:58:10.822791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:58.495 [2024-07-23 08:58:10.822830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:58.495 [2024-07-23 08:58:10.832343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:58.495 [2024-07-23 08:58:10.832666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.495 [2024-07-23 08:58:10.832714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7280 with addr=10.0.0.2, port=4420 00:44:58.495 [2024-07-23 08:58:10.832744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:58.495 [2024-07-23 08:58:10.832787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:58.495 [2024-07-23 08:58:10.832827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:58.495 [2024-07-23 08:58:10.832855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:58.495 [2024-07-23 08:58:10.832879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:58.495 [2024-07-23 08:58:10.832918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
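The repeated "connect() failed, errno = 111" records here are the host's bdev_nvme module retrying the path on port 4420, which the nvmf_subsystem_remove_listener call just removed (errno 111 is ECONNREFUSED). The retries stop once the next discovery log page drops that path ("4420 not found" a few records below) while 4421 survives, and the test then waits on the check traced at discovery.sh@131, roughly as in this sketch (NVMF_SECOND_PORT is 4421 in this run):

    # Only the second listener's port should remain on controller nvme0
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'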
00:44:58.495 [2024-07-23 08:58:10.842461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:58.495 [2024-07-23 08:58:10.842780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:58.495 [2024-07-23 08:58:10.842832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7280 with addr=10.0.0.2, port=4420 00:44:58.495 [2024-07-23 08:58:10.842864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7280 is same with the state(5) to be set 00:44:58.495 [2024-07-23 08:58:10.842907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:44:58.496 [2024-07-23 08:58:10.842948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:58.496 [2024-07-23 08:58:10.842987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:44:58.496 [2024-07-23 08:58:10.843012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:58.496 [2024-07-23 08:58:10.843051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:44:58.496 [2024-07-23 08:58:10.849159] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:44:58.496 [2024-07-23 08:58:10.849217] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.496 08:58:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:44:58.496 08:58:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 
-- # (( max-- )) 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:44:58.757 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:44:59.017 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:44:59.018 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:44:59.018 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:59.018 08:58:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:44:59.955 [2024-07-23 08:58:12.377914] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:44:59.955 [2024-07-23 08:58:12.377972] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:44:59.955 [2024-07-23 08:58:12.378031] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:45:00.224 [2024-07-23 08:58:12.505545] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:45:00.224 [2024-07-23 08:58:12.574722] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:45:00.224 [2024-07-23 08:58:12.574819] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:00.224 request: 00:45:00.224 { 00:45:00.224 "name": "nvme", 00:45:00.224 "trtype": "tcp", 00:45:00.224 "traddr": "10.0.0.2", 00:45:00.224 "adrfam": "ipv4", 00:45:00.224 "trsvcid": "8009", 00:45:00.224 "hostnqn": "nqn.2021-12.io.spdk:test", 00:45:00.224 "wait_for_attach": true, 00:45:00.224 "method": "bdev_nvme_start_discovery", 00:45:00.224 "req_id": 1 00:45:00.224 } 00:45:00.224 Got JSON-RPC error response 00:45:00.224 response: 00:45:00.224 { 00:45:00.224 "code": -17, 00:45:00.224 "message": "File exists" 00:45:00.224 } 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:45:00.224 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:00.499 request: 00:45:00.499 { 00:45:00.499 "name": "nvme_second", 00:45:00.499 "trtype": "tcp", 00:45:00.499 "traddr": "10.0.0.2", 00:45:00.499 "adrfam": "ipv4", 00:45:00.499 "trsvcid": "8009", 00:45:00.499 "hostnqn": "nqn.2021-12.io.spdk:test", 00:45:00.499 "wait_for_attach": true, 00:45:00.499 "method": "bdev_nvme_start_discovery", 00:45:00.499 "req_id": 1 00:45:00.499 } 00:45:00.499 Got JSON-RPC error response 00:45:00.499 response: 00:45:00.499 { 00:45:00.499 "code": -17, 00:45:00.499 "message": "File exists" 00:45:00.499 } 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
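For reference, the duplicate-discovery behaviour exercised at host/discovery.sh@141-151 above reduces to the following sequence. This is only a minimal sketch using scripts/rpc.py directly (the test goes through its rpc_cmd wrapper), assuming an SPDK host application serving RPCs on /tmp/host.sock and a discovery service already listening on 10.0.0.2:8009, both taken from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # First call attaches the discovery controller and, with -w, waits for the attach.
    $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Repeating the call (same -b name, or a new name against the same 8009 listener)
    # is expected to fail with JSON-RPC error -17 "File exists", as seen in the log above.
    $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "duplicate discovery rejected"
    # The test then verifies that neither the discovery list nor the bdev list changed:
    $RPC -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
    $RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'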
00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:45:00.499 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:00.500 08:58:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:01.453 [2024-07-23 08:58:13.919150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:45:01.453 [2024-07-23 08:58:13.919239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f8400 with addr=10.0.0.2, port=8010 00:45:01.453 [2024-07-23 08:58:13.919356] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:45:01.453 [2024-07-23 08:58:13.919393] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:01.453 [2024-07-23 08:58:13.919420] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:45:02.836 [2024-07-23 08:58:14.921557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:45:02.836 [2024-07-23 08:58:14.921631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f8680 with addr=10.0.0.2, port=8010 00:45:02.836 [2024-07-23 08:58:14.921722] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:45:02.836 [2024-07-23 08:58:14.921752] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:02.836 [2024-07-23 08:58:14.921777] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:45:03.406 [2024-07-23 08:58:15.923518] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:45:03.406 request: 00:45:03.406 { 00:45:03.406 "name": "nvme_second", 00:45:03.406 "trtype": "tcp", 00:45:03.406 "traddr": "10.0.0.2", 00:45:03.667 "adrfam": "ipv4", 00:45:03.667 "trsvcid": "8010", 00:45:03.667 "hostnqn": "nqn.2021-12.io.spdk:test", 00:45:03.667 "wait_for_attach": false, 00:45:03.667 "attach_timeout_ms": 3000, 00:45:03.667 "method": "bdev_nvme_start_discovery", 00:45:03.667 "req_id": 1 00:45:03.667 } 00:45:03.667 Got JSON-RPC error response 00:45:03.667 response: 00:45:03.667 { 00:45:03.667 "code": -110, 00:45:03.667 "message": "Connection timed out" 00:45:03.667 } 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:03.667 08:58:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2504368 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:45:03.667 rmmod nvme_tcp 00:45:03.667 rmmod nvme_fabrics 00:45:03.667 rmmod nvme_keyring 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2504210 ']' 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2504210 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2504210 ']' 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2504210 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2504210 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2504210' 00:45:03.667 killing process with pid 2504210 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2504210 00:45:03.667 08:58:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2504210 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:05.577 08:58:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:07.488 00:45:07.488 real 0m19.688s 00:45:07.488 user 0m29.854s 00:45:07.488 sys 0m4.939s 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:45:07.488 ************************************ 00:45:07.488 END TEST nvmf_host_discovery 00:45:07.488 ************************************ 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:45:07.488 ************************************ 00:45:07.488 START TEST nvmf_host_multipath_status 00:45:07.488 ************************************ 00:45:07.488 08:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:45:07.749 * Looking for test storage... 00:45:07.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:07.749 08:58:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
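The values sourced from test/nvmf/common.sh in the trace above are the ones the rest of this run keys off; a condensed view (literal values copied from the log, with the host NQN regenerated on every run by nvme gen-hostnqn):

    NVMF_PORT=4420  NVMF_SECOND_PORT=4421  NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100  NET_TYPE=phy
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # this run: nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02   # the uuid portion of the generated NQN
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn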
00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:45:07.749 08:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:11.042 08:58:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:11.042 08:58:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:45:11.042 Found 0000:84:00.0 (0x8086 - 0x159b) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:45:11.042 Found 0000:84:00.1 (0x8086 - 0x159b) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:45:11.042 Found net devices under 0000:84:00.0: cvl_0_0 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:45:11.042 Found net devices under 0000:84:00.1: cvl_0_1 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:45:11.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:11.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:45:11.042 00:45:11.042 --- 10.0.0.2 ping statistics --- 00:45:11.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:11.042 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:45:11.042 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:11.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:11.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:45:11.042 00:45:11.042 --- 10.0.0.1 ping statistics --- 00:45:11.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:11.042 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2507924 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2507924 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2507924 ']' 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:11.043 08:58:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:11.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:11.043 08:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:45:11.043 [2024-07-23 08:58:23.425390] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:45:11.043 [2024-07-23 08:58:23.425569] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:11.043 EAL: No free 2048 kB hugepages reported on node 1 00:45:11.304 [2024-07-23 08:58:23.646852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:11.873 [2024-07-23 08:58:24.123993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:11.873 [2024-07-23 08:58:24.124124] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:11.873 [2024-07-23 08:58:24.124199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:11.873 [2024-07-23 08:58:24.124245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:11.873 [2024-07-23 08:58:24.124293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
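The target bring-up that the trace just completed can be read as the condensed sequence below. The namespace, interface and socket names (cvl_0_0_ns_spdk, cvl_0_0/cvl_0_1, /var/tmp/spdk.sock) are specific to this test bed and taken from the log, and the polling loop is only a stand-in for the autotest waitforlisten helper, not its actual implementation:

    # nvmfappstart -m 0x3: run nvmf_tgt inside the target namespace on two cores
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # block until the target answers on the default RPC socket before issuing further RPCs
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done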
00:45:11.873 [2024-07-23 08:58:24.124480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:11.873 [2024-07-23 08:58:24.124488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2507924 00:45:12.442 08:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:45:13.010 [2024-07-23 08:58:25.449918] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:13.010 08:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:45:13.579 Malloc0 00:45:13.580 08:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:45:14.149 08:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:14.720 08:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:15.290 [2024-07-23 08:58:27.748170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:15.290 08:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:45:16.230 [2024-07-23 08:58:28.394131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2508475 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2508475 
/var/tmp/bdevperf.sock 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2508475 ']' 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:16.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:16.230 08:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:45:17.626 08:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:17.626 08:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:45:17.626 08:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:45:18.196 08:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:45:19.135 Nvme0n1 00:45:19.135 08:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:45:20.073 Nvme0n1 00:45:20.073 08:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:45:20.073 08:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:45:21.981 08:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:45:21.981 08:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:45:22.552 08:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:45:23.121 08:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:45:24.505 08:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:45:24.505 08:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:45:24.505 08:58:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:24.505 08:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:45:25.075 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:25.075 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:45:25.075 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:45:25.075 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:25.644 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:25.644 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:45:25.644 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:25.644 08:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:45:25.644 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:25.644 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:45:25.644 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:25.644 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:45:26.213 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:26.213 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:45:26.213 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:26.213 08:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:45:27.153 08:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:27.153 08:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:45:27.153 08:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:27.153 08:58:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:45:27.411 08:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:27.411 08:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:45:27.411 08:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:45:27.671 08:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:45:28.610 08:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:45:29.549 08:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:45:29.549 08:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:45:29.549 08:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:29.549 08:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:45:30.118 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:30.118 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:45:30.119 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:30.119 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:45:30.687 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:30.687 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:45:30.687 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:30.687 08:58:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:45:30.947 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:30.947 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:45:30.947 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:30.947 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:45:31.205 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:31.205 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:45:31.205 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:31.205 08:58:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:45:31.774 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:31.774 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:45:31.774 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:31.774 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:45:32.032 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:32.032 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:45:32.032 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:45:32.323 08:58:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:45:32.895 08:58:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:45:34.277 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:45:34.277 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:45:34.277 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:34.277 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:45:34.537 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:34.537 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:45:34.537 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:34.537 08:58:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:45:35.107 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:35.108 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:45:35.108 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:35.108 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:45:35.366 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:35.366 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:45:35.366 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:35.366 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:45:35.625 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:35.625 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:45:35.625 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:35.625 08:58:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:45:35.885 08:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:35.885 08:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:45:35.885 08:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:35.885 08:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:45:36.455 08:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:36.455 08:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:45:36.455 08:58:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:45:37.025 08:58:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:45:37.966 08:58:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:45:38.906 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:45:38.906 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:45:38.906 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:38.906 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:45:39.477 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:39.477 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:45:39.477 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:39.477 08:58:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:45:40.046 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:40.046 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:45:40.046 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:40.046 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:45:40.616 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:40.616 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:45:40.616 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:40.616 08:58:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:45:41.191 08:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:41.191 08:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:45:41.191 08:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:45:41.191 08:58:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:45:41.762 08:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:41.762 08:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:45:41.762 08:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:41.762 08:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:45:42.332 08:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:42.332 08:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:45:42.332 08:58:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:45:43.271 08:58:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:45:43.531 08:58:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:45:44.912 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:45:44.912 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:45:44.912 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:44.912 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:45:45.171 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:45.171 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:45:45.171 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:45.171 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:45:45.429 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:45.429 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:45:45.429 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:45.429 08:58:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:45:45.687 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:45.687 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:45:45.687 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:45.687 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:45:46.653 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:46.653 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:45:46.653 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:46.653 08:58:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:45:46.917 08:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:46.917 08:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:45:46.917 08:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:46.917 08:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:45:47.520 08:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:47.520 08:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:45:47.520 08:58:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:45:48.458 08:59:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:45:49.027 08:59:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:45:49.964 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:45:49.964 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:45:49.964 08:59:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:49.964 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:45:50.222 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:50.222 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:45:50.222 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:50.222 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:45:50.481 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:50.481 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:45:50.481 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:50.481 08:59:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:45:51.049 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:51.049 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:45:51.049 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:51.049 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:45:51.618 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:51.618 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:45:51.618 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:51.618 08:59:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:45:52.188 08:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:45:52.189 08:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:45:52.189 08:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:52.189 
08:59:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:45:52.758 08:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:52.758 08:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:45:53.018 08:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:45:53.018 08:59:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:45:53.589 08:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:45:54.529 08:59:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:45:55.470 08:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:45:55.470 08:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:45:55.470 08:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:55.470 08:59:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:45:56.041 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:56.041 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:45:56.041 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:56.041 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:45:56.611 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:56.611 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:45:56.611 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:56.611 08:59:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:45:56.870 08:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:56.870 08:59:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:45:56.870 08:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:56.870 08:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:45:57.812 08:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:57.812 08:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:45:57.812 08:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:57.812 08:59:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:45:58.382 08:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:58.382 08:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:45:58.382 08:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:45:58.382 08:59:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:45:58.952 08:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:45:58.952 08:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:45:58.952 08:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:45:59.521 08:59:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:46:00.090 08:59:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:46:01.029 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:46:01.029 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:46:01.029 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:01.029 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:46:01.608 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:46:01.608 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:46:01.608 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:46:01.608 08:59:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:02.197 08:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:02.197 08:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:46:02.197 08:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:02.197 08:59:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:46:02.770 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:02.770 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:46:02.770 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:02.770 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:46:03.369 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:03.369 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:46:03.369 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:03.369 08:59:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:46:03.935 08:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:03.935 08:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:46:03.935 08:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:03.935 08:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:46:04.502 08:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:04.502 08:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:46:04.502 
08:59:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:46:04.760 08:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:46:05.326 08:59:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:46:06.260 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:46:06.260 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:46:06.260 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:06.260 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:46:06.518 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:06.518 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:46:06.518 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:06.518 08:59:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:46:06.776 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:06.776 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:46:06.776 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:06.776 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:46:07.342 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:07.342 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:46:07.342 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:07.342 08:59:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:46:07.909 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:07.909 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:46:07.909 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:07.909 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:46:08.476 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:08.476 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:46:08.476 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:08.476 08:59:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:46:09.041 08:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:09.042 08:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:46:09.042 08:59:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:46:09.997 08:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:46:10.563 08:59:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:46:11.494 08:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:46:11.494 08:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:46:11.494 08:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:11.494 08:59:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:46:12.060 08:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:12.060 08:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:46:12.060 08:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:12.060 08:59:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:46:12.626 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:46:12.626 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:46:12.626 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:12.626 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:46:13.559 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:13.559 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:46:13.559 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:46:13.559 08:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:13.817 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:13.817 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:46:13.817 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:13.817 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:46:14.749 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:46:14.749 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:46:14.749 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:46:14.749 08:59:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2508475 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2508475 ']' 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2508475 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2508475 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # 
process_name=reactor_2 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2508475' 00:46:15.313 killing process with pid 2508475 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2508475 00:46:15.313 08:59:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2508475 00:46:15.887 Connection closed with partial response: 00:46:15.887 00:46:15.887 00:46:16.477 08:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2508475 00:46:16.477 08:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:46:16.477 [2024-07-23 08:58:28.602576] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:46:16.477 [2024-07-23 08:58:28.602915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508475 ] 00:46:16.477 EAL: No free 2048 kB hugepages reported on node 1 00:46:16.477 [2024-07-23 08:58:28.843891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.477 [2024-07-23 08:58:29.155261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:46:16.477 Running I/O for 90 seconds... 00:46:16.477 [2024-07-23 08:58:55.415133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.477 [2024-07-23 08:58:55.415249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:46:16.477 [2024-07-23 08:58:55.416640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.477 [2024-07-23 08:58:55.416688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:46:16.477 [2024-07-23 08:58:55.416747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.477 [2024-07-23 08:58:55.416783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:46:16.477 [2024-07-23 08:58:55.416835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.477 [2024-07-23 08:58:55.416870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:46:16.477 [2024-07-23 08:58:55.416921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.477 [2024-07-23 08:58:55.416956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417008] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
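The set_ANA_state / sleep 1 / check_status cycles traced above reduce to three small helpers in host/multipath_status.sh. What follows is a minimal bash sketch reconstructed only from the traced command lines (script lines @59-@73); the rpc_py and bdevperf_rpc_sock variable names are assumptions made for illustration, and the real script may structure these helpers differently.

# Hedged reconstruction of the helpers exercised by the trace above; paths are
# taken from the traced commands, variable names are assumed.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {
    # $1 = listener port, $2 = io_path field (current/connected/accessible), $3 = expected value
    [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="'$1'").'$2) == "$3" ]]
}

check_status() {
    # Expected current/connected/accessible flags for ports 4420 and 4421,
    # asserted in the same order the trace shows (@68-@73).
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Each phase in the log is then just set_ANA_state <state_4420> <state_4421>; sleep 1; check_status <six expected flags>, with the one-second sleep giving the initiator time to observe the ANA change before the flags are asserted.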
00:46:16.478 [2024-07-23 08:58:55.417891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.417925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.417976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.418926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.418959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.419012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.419046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.419097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.419130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.419182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.419216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.419267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.419301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.419364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.419399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.419452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.419486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.419538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.419572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.420385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.420433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.420500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.420536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.420591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.420626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.420682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.420717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.420772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.478 [2024-07-23 08:58:55.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:46:16.478 [2024-07-23 08:58:55.420861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.420896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.420952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.420987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
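For reference, the killprocess 2508475 call traced just before this try.txt dump (autotest_common.sh @948-@972) is what tears down the bdevperf instance and accounts for the "Connection closed with partial response" lines. A rough sketch of that helper, inferred from the traced checks alone; the sudo branch and the overall structure are assumptions, not the verified autotest_common.sh source.

killprocess() {
    # Inferred behaviour: refuse an empty pid, confirm the process exists,
    # look up its command name, use sudo only for sudo-wrapped processes,
    # then kill it and wait for it to exit.
    local pid=$1
    [ -n "$pid" ] || return 1                            # @948: '[' -z <pid> ']'
    kill -0 "$pid" || return 1                           # @952: kill -0 <pid>
    local process_name=
    if [ "$(uname)" = Linux ]; then                      # @953: uname check
        process_name=$(ps --no-headers -o comm= "$pid")  # @954: was reactor_2 in this run
    fi
    if [ "$process_name" = sudo ]; then                  # @958: compared against "sudo"
        sudo kill "$pid"                                 # assumed branch, not taken here
    else
        echo "killing process with pid $pid"             # @966
        kill "$pid"                                      # @967
    fi
    wait "$pid" || true                                  # @972: wait <pid>
}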
00:46:16.479 [2024-07-23 08:58:55.421257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.421910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.421966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.422919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.422954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.423927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.423961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.424027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.424064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.424123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.424158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:46:16.479 [2024-07-23 08:58:55.424216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.424252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.424325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.424363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:46:16.479 [2024-07-23 08:58:55.424422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.479 [2024-07-23 08:58:55.424458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.424517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.424552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.424611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.424646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.424705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.424740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.424798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.424834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.424893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.424929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.424987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.425908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.425967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.426939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.426974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.427032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:46:16.480 [2024-07-23 08:58:55.427068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:58:55.427128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:58:55.427166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.838815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.838928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:46:16.480 [2024-07-23 08:59:22.839659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 
nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.480 [2024-07-23 08:59:22.839693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.839741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.839775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.839822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.839856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.839906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.839941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.839990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.840937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.840972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.841055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.841139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.841221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.841305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 
dnr:0 00:46:16.481 [2024-07-23 08:59:22.841374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.481 [2024-07-23 08:59:22.841409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.481 [2024-07-23 08:59:22.841493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.481 [2024-07-23 08:59:22.841576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.481 [2024-07-23 08:59:22.841659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.481 [2024-07-23 08:59:22.841742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.841826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.841909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.841958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.841992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.842042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.842076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.845785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.845835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.845896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.845933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.845984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.846019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.846077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.846113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.846163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.846198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.846249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.846284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.846343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.846381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.846432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.481 [2024-07-23 08:59:22.846468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:46:16.481 [2024-07-23 08:59:22.846519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.846554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.846604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.846639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.846689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.846725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.846775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.846809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.846859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.846894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.846946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.846982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.847409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.847510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.847600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.847684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.847769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.847852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.847936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.847985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.848020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.848105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.848189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.848273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.848371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.482 [2024-07-23 08:59:22.848457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.482 [2024-07-23 08:59:22.848550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.482 [2024-07-23 08:59:22.848638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.482 [2024-07-23 08:59:22.848721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.482 [2024-07-23 08:59:22.848823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.482 [2024-07-23 08:59:22.848911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.848960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.482 [2024-07-23 08:59:22.848995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:16.482 [2024-07-23 08:59:22.849045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.482 [2024-07-23 08:59:22.849080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:46:16.483 
[2024-07-23 08:59:22.849658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.849776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.849859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.849907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.849941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.850560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.850603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.850661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.850696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.850746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.850780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.850830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.850867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.850917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.850952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.851035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.851117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.483 [2024-07-23 08:59:22.851198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:46:16.483 [2024-07-23 08:59:22.851926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.483 [2024-07-23 08:59:22.851959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:46:16.483
[... the same NOTICE pair repeats for the remaining in-flight I/O on qid:1: nvme_qpair.c: 243:nvme_io_qpair_print_command prints each outstanding READ/WRITE command (nsid:1, lba 110264 through 111976, len:8) and nvme_qpair.c: 474:spdk_nvme_print_completion prints its completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 08:59:22.852 through 08:59:22.890 ...]
00:46:16.489 [2024-07-23 08:59:22.890841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.489 [2024-07-23 08:59:22.890875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC
ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:46:16.489 [2024-07-23 08:59:22.890925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.489 [2024-07-23 08:59:22.890959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:46:16.489 [2024-07-23 08:59:22.891008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.891041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.891124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.891215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.891300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.891400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.891483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.891566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.891650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.891734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.891817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.891901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.891950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.891984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.892507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111800 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.892590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.892919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.892970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.893004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.893052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.893086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.893134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.893168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.893217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.893251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.893316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.893367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.894786] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.490 [2024-07-23 08:59:22.894831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.894889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.894984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.895039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.895075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.895122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.895157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.895205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.895240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.895288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.895339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.895391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.895426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.895475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.895509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:46:16.490 [2024-07-23 08:59:22.895557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.490 [2024-07-23 08:59:22.895592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.895639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.895674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 
08:59:22.895723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.895758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.895816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.895851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.895900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.895935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.895984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.896018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.896066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.896100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.896149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.896183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.898363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.898503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.898591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.898674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.898759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.898840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.898922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.898979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.899014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.899533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.899782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.899865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.899915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.899951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.900040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.900294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:46:16.491 [2024-07-23 08:59:22.900640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:46:16.491 [2024-07-23 08:59:22.900851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.491 [2024-07-23 08:59:22.900886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:46:16.492 [2024-07-23 08:59:22.900933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.492 [2024-07-23 08:59:22.900967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:46:16.492 [2024-07-23 08:59:22.901015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.492 [2024-07-23 08:59:22.901055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:46:16.492 [2024-07-23 08:59:22.901104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.492 [2024-07-23 08:59:22.901139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:46:16.492 [2024-07-23 08:59:22.901186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.492 [2024-07-23 08:59:22.901220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:46:16.492 [2024-07-23 08:59:22.901270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:46:16.492 [2024-07-23 08:59:22.901305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:46:16.492 Received shutdown signal, test time was about 54.965006 seconds 00:46:16.492 00:46:16.492 Latency(us) 00:46:16.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:16.492 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:16.492 Verification LBA range: start 0x0 length 0x4000 00:46:16.492 Nvme0n1 : 54.96 4523.43 17.67 0.00 0.00 28253.99 497.59 6039797.76 00:46:16.492 =================================================================================================================== 00:46:16.492 Total : 4523.43 17.67 0.00 0.00 28253.99 497.59 6039797.76 00:46:16.492 08:59:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:17.057 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:17.315 rmmod nvme_tcp 00:46:17.315 rmmod nvme_fabrics 00:46:17.315 rmmod nvme_keyring 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2507924 ']' 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2507924 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2507924 ']' 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2507924 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 
-- # uname 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2507924 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2507924' 00:46:17.315 killing process with pid 2507924 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2507924 00:46:17.315 08:59:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2507924 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:19.846 08:59:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:21.747 08:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:46:21.747 00:46:21.747 real 1m14.008s 00:46:21.747 user 3m53.521s 00:46:21.747 sys 0m17.787s 00:46:21.747 08:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:21.747 08:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:46:21.747 ************************************ 00:46:21.747 END TEST nvmf_host_multipath_status 00:46:21.747 ************************************ 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:46:21.747 ************************************ 00:46:21.747 START TEST nvmf_discovery_remove_ifc 00:46:21.747 ************************************ 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:46:21.747 * Looking for test storage... 
00:46:21.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:21.747 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
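The PATH echoed above carries the same go/protoc/golangci directories many times because paths/export.sh is sourced once per nested test script and prepends its entries unconditionally. A minimal guard of the following shape (illustrative sketch only, not taken from the SPDK scripts) would keep each directory to a single entry:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, skip
            *) PATH="$1:$PATH" ;;     # prepend once
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH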
00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:46:21.748 08:59:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:46:25.036 08:59:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:46:25.036 Found 0000:84:00.0 (0x8086 - 0x159b) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:46:25.036 Found 0000:84:00.1 (0x8086 - 0x159b) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:46:25.036 Found net devices under 0000:84:00.0: cvl_0_0 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:25.036 
08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:46:25.036 Found net devices under 0000:84:00.1: cvl_0_1 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:25.036 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:46:25.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:25.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:46:25.037 00:46:25.037 --- 10.0.0.2 ping statistics --- 00:46:25.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:25.037 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:25.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:25.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:46:25.037 00:46:25.037 --- 10.0.0.1 ping statistics --- 00:46:25.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:25.037 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2517538 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2517538 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2517538 ']' 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:25.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:25.037 08:59:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:25.297 [2024-07-23 08:59:37.677954] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:46:25.297 [2024-07-23 08:59:37.678293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:25.556 EAL: No free 2048 kB hugepages reported on node 1 00:46:25.556 [2024-07-23 08:59:37.958467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:25.815 [2024-07-23 08:59:38.275818] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:25.815 [2024-07-23 08:59:38.275901] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:25.815 [2024-07-23 08:59:38.275934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:25.815 [2024-07-23 08:59:38.275964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:25.815 [2024-07-23 08:59:38.275991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:25.815 [2024-07-23 08:59:38.276055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:26.751 08:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:26.751 08:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:46:26.751 08:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:26.751 08:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:26.751 08:59:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:26.751 [2024-07-23 08:59:39.023848] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:26.751 [2024-07-23 08:59:39.032120] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:46:26.751 null0 00:46:26.751 [2024-07-23 08:59:39.064612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2517692 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
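(For readability, a condensed sketch of the launch pattern the trace follows here: the target-side nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace on the default RPC socket, while a second nvmf_tgt acts as the host/initiator on its own socket and is held at --wait-for-rpc so bdev_nvme options can be set before subsystem initialization. Paths are shortened and the helper plumbing from autotest_common.sh (nvmfappstart, waitforlisten) is omitted; the flags themselves are taken verbatim from the trace.)

  # target application, reachable only inside the network namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # host-side application with a private RPC socket, paused until framework_start_init
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # must be issued before init
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init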
00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2517692 /tmp/host.sock 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2517692 ']' 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:46:26.751 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:26.751 08:59:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:26.751 [2024-07-23 08:59:39.199925] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:46:26.751 [2024-07-23 08:59:39.200114] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517692 ] 00:46:27.010 EAL: No free 2048 kB hugepages reported on node 1 00:46:27.010 [2024-07-23 08:59:39.379488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:27.268 [2024-07-23 08:59:39.697667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.204 08:59:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:28.771 08:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.771 08:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:46:28.771 08:59:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.771 08:59:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:29.706 [2024-07-23 08:59:42.200709] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:46:29.706 [2024-07-23 08:59:42.200769] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:46:29.706 [2024-07-23 08:59:42.200829] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:46:29.965 [2024-07-23 08:59:42.329330] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:46:29.965 [2024-07-23 08:59:42.431135] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:46:29.965 [2024-07-23 08:59:42.431247] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:46:29.965 [2024-07-23 08:59:42.431372] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:46:29.965 [2024-07-23 08:59:42.431431] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:46:29.965 [2024-07-23 08:59:42.431493] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:29.965 [2024-07-23 08:59:42.436976] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f7780 was disconnected and freed. delete nvme_qpair. 
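(The wait_for_bdev / get_bdev_list helpers exercised above boil down to polling the host app's bdev list until the expected name appears or disappears. The sketch below is reconstructed from the pipeline visible in the trace -- rpc_cmd bdev_get_bdevs | jq -r '.[].name' | sort | xargs -- and may differ from the real helpers in host/discovery_remove_ifc.sh, which likely also enforce a timeout.)

  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # loop until the bdev list equals the expected value ('' means "no bdevs left")
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev nvme0n1   # returns once discovery has attached the namespace as nvme0n1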
00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:46:29.965 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:46:30.224 08:59:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:31.157 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.415 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:46:31.415 08:59:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:32.356 08:59:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:46:32.356 08:59:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:33.289 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.547 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:46:33.547 08:59:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:46:34.483 08:59:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:35.417 [2024-07-23 08:59:47.874210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:46:35.417 [2024-07-23 08:59:47.874336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:46:35.417 [2024-07-23 08:59:47.874381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:35.417 [2024-07-23 08:59:47.874420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:46:35.417 [2024-07-23 08:59:47.874449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:35.418 [2024-07-23 08:59:47.874479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:46:35.418 [2024-07-23 08:59:47.874507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:35.418 [2024-07-23 08:59:47.874536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:46:35.418 [2024-07-23 08:59:47.874565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:35.418 [2024-07-23 08:59:47.874596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:46:35.418 [2024-07-23 08:59:47.874636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:35.418 [2024-07-23 08:59:47.874666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:46:35.418 [2024-07-23 08:59:47.884217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:46:35.418 [2024-07-23 08:59:47.894277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:46:35.418 08:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:35.418 08:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:35.418 08:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:35.418 08:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:35.418 08:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:35.418 08:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:35.418 08:59:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:36.791 [2024-07-23 08:59:48.943460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:46:36.791 [2024-07-23 08:59:48.943588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:46:36.791 [2024-07-23 08:59:48.943639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:46:36.791 [2024-07-23 08:59:48.943727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file 
descriptor 00:46:36.791 [2024-07-23 08:59:48.944626] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:46:36.791 [2024-07-23 08:59:48.944687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:36.791 [2024-07-23 08:59:48.944730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:46:36.791 [2024-07-23 08:59:48.944764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:46:36.791 [2024-07-23 08:59:48.944828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:46:36.792 [2024-07-23 08:59:48.944864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:46:36.792 08:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.792 08:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:46:36.792 08:59:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:37.728 [2024-07-23 08:59:49.947455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:46:37.728 [2024-07-23 08:59:49.947540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:37.728 [2024-07-23 08:59:49.947573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:46:37.728 [2024-07-23 08:59:49.947603] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:46:37.728 [2024-07-23 08:59:49.947662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:46:37.728 [2024-07-23 08:59:49.947742] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:46:37.728 [2024-07-23 08:59:49.947838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:46:37.728 [2024-07-23 08:59:49.947891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:37.728 [2024-07-23 08:59:49.947932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:46:37.728 [2024-07-23 08:59:49.947963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:37.728 [2024-07-23 08:59:49.947993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:46:37.728 [2024-07-23 08:59:49.948023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:37.728 [2024-07-23 08:59:49.948053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:46:37.728 [2024-07-23 08:59:49.948082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:37.728 [2024-07-23 08:59:49.948112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:46:37.728 [2024-07-23 08:59:49.948140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:37.728 [2024-07-23 08:59:49.948168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
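(A condensed view of the fault-injection step driving the errors above, using the same ip commands that appear in the trace. With --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1 and --ctrlr-loss-timeout-sec 2 passed to bdev_nvme_start_discovery earlier, the controller and its nvme0n1 bdev are expected to be torn down within a few seconds of the target-side interface going away; restoring the interface then lets the discovery service re-attach the subsystem, which surfaces as nvme1/nvme1n1 in the following lines.)

  # take the target-side interface away from under the established connection
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''          # polling helper sketched earlier; waits for nvme0n1 to vanish
  # bring it back and wait for rediscovery to attach a fresh controller
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1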
00:46:37.728 [2024-07-23 08:59:49.948269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7280 (9): Bad file descriptor 00:46:37.728 [2024-07-23 08:59:49.949263] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:46:37.728 [2024-07-23 08:59:49.949306] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:37.728 08:59:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:46:37.728 08:59:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:39.105 08:59:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:46:39.105 08:59:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:46:39.672 [2024-07-23 08:59:52.006521] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:46:39.672 [2024-07-23 08:59:52.006568] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:46:39.672 [2024-07-23 08:59:52.006630] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:46:39.672 [2024-07-23 08:59:52.092943] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:46:39.672 [2024-07-23 08:59:52.158495] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:46:39.672 [2024-07-23 08:59:52.158589] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:46:39.672 [2024-07-23 08:59:52.158692] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:46:39.672 [2024-07-23 08:59:52.158752] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:46:39.672 [2024-07-23 08:59:52.158786] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:46:39.931 [2024-07-23 08:59:52.205846] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f7f00 was disconnected and freed. delete nvme_qpair. 
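(Once nvme1n1 is confirmed below, the remainder of the trace is teardown: the host app and the namespaced target are killed, and nvmftestfini unloads the kernel modules pulled in for the test and removes the namespace. Roughly, with the helper plumbing from autotest_common.sh and nvmf/common.sh omitted:)

  kill "$hostpid"                          # 2517692 in this run
  kill "$nvmfpid"                          # 2517538, the target inside the namespace
  modprobe -v -r nvme-tcp nvme-fabrics     # nvme_keyring is dropped with them, as the rmmod lines show
  ip netns delete cvl_0_0_ns_spdk          # what _remove_spdk_ns amounts to here (assumed)
  ip -4 addr flush cvl_0_1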
00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2517692 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2517692 ']' 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2517692 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2517692 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2517692' 00:46:39.931 killing process with pid 2517692 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2517692 00:46:39.931 08:59:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2517692 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:41.308 rmmod nvme_tcp 00:46:41.308 rmmod nvme_fabrics 00:46:41.308 rmmod nvme_keyring 00:46:41.308 08:59:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2517538 ']' 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2517538 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2517538 ']' 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2517538 00:46:41.308 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:46:41.567 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:41.567 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2517538 00:46:41.567 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:46:41.567 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:46:41.567 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2517538' 00:46:41.567 killing process with pid 2517538 00:46:41.567 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2517538 00:46:41.567 08:59:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2517538 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:43.468 08:59:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:46:45.375 00:46:45.375 real 0m23.499s 00:46:45.375 user 0m33.362s 00:46:45.375 sys 0m4.843s 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:46:45.375 ************************************ 00:46:45.375 END TEST nvmf_discovery_remove_ifc 00:46:45.375 ************************************ 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:46:45.375 ************************************ 00:46:45.375 START TEST nvmf_identify_kernel_target 00:46:45.375 ************************************ 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:46:45.375 * Looking for test storage... 00:46:45.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:45.375 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:45.376 08:59:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:46:45.376 08:59:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:46:48.670 09:00:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:46:48.670 Found 0000:84:00.0 (0x8086 - 0x159b) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
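(The device enumeration above and below reduces to: match the NIC's PCI device ID against the known e810/x722/mlx lists, then resolve each matching PCI address to its network interface through sysfs and keep the ones whose link is up. A minimal stand-alone version of that resolution, with the pci_bus_cache plumbing from nvmf/common.sh left out; the operstate read merely approximates the "up == up" test seen in the trace:)

  for pci in 0000:84:00.0 0000:84:00.1; do       # the two E810 ports (0x8086:0x159b) found above
      for net in /sys/bus/pci/devices/$pci/net/*; do
          dev=${net##*/}
          [[ "$(cat "$net/operstate" 2>/dev/null)" == up ]] || continue
          echo "Found net devices under $pci: $dev"
      done
  done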
00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:46:48.670 Found 0000:84:00.1 (0x8086 - 0x159b) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:46:48.670 Found net devices under 0000:84:00.0: cvl_0_0 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:48.670 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:46:48.671 Found net devices under 0000:84:00.1: cvl_0_1 00:46:48.671 
09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:48.671 09:00:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:46:48.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:48.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:46:48.671 00:46:48.671 --- 10.0.0.2 ping statistics --- 00:46:48.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:48.671 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:48.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:48.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:46:48.671 00:46:48.671 --- 10.0.0.1 ping statistics --- 00:46:48.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:48.671 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:48.671 09:00:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:50.625 Waiting for block devices as requested 00:46:50.625 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:46:50.625 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:50.886 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:50.886 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:51.146 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:51.146 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:51.146 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:51.146 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:51.405 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:51.405 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:51.405 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:51.665 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:51.665 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:51.665 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:51.925 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:51.925 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:52.185 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
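The echo commands that follow in the trace show only the values being written, not the configfs files receiving them, so here is a condensed sketch of the kernel target that configure_kernel_target assembles next. Paths and values are copied from the trace; which attribute file each echo lands in is inferred from the standard kernel nvmet configfs layout and should be read as an assumption, not a quote of the script.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet                          # as in the trace; nvmet_tcp is loaded by the time cleanup removes it
    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"       # destination inferred
    echo 1            > "$subsys/attr_allow_any_host"                  # destination inferred
    echo /dev/nvme0n1 > "$ns/device_path"   # the non-zoned disk with no GPT, found just below
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr" # target_ip from the trace (the main-namespace address)
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"     # publish the subsystem on the port

Once the symlink is in place the kernel answers discovery on 10.0.0.1:4420, which matches the two-record discovery log printed a few lines further on.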
00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:52.185 No valid GPT data, bailing 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:52.185 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:52.445 09:00:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:46:52.445 00:46:52.445 Discovery Log Number of Records 2, Generation counter 2 00:46:52.445 =====Discovery Log Entry 0====== 00:46:52.445 trtype: tcp 00:46:52.445 adrfam: ipv4 00:46:52.445 subtype: current discovery subsystem 00:46:52.445 treq: not specified, sq flow control disable supported 00:46:52.445 portid: 1 00:46:52.445 trsvcid: 4420 00:46:52.445 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:52.445 traddr: 10.0.0.1 00:46:52.445 eflags: none 00:46:52.445 sectype: none 00:46:52.445 =====Discovery Log Entry 1====== 00:46:52.445 trtype: tcp 00:46:52.445 adrfam: ipv4 00:46:52.445 subtype: nvme subsystem 00:46:52.445 treq: not specified, sq flow control disable supported 00:46:52.445 portid: 1 00:46:52.445 trsvcid: 4420 00:46:52.445 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:52.445 traddr: 10.0.0.1 00:46:52.445 eflags: none 00:46:52.445 sectype: none 00:46:52.445 09:00:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:46:52.445 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:46:52.705 EAL: No free 2048 kB hugepages reported on node 1 00:46:52.705 ===================================================== 00:46:52.705 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:46:52.705 ===================================================== 00:46:52.705 Controller Capabilities/Features 00:46:52.705 ================================ 00:46:52.705 Vendor ID: 0000 00:46:52.705 Subsystem Vendor ID: 0000 00:46:52.705 Serial Number: 29e3213d655ad2002716 00:46:52.705 Model Number: Linux 00:46:52.705 Firmware Version: 6.7.0-68 00:46:52.705 Recommended Arb Burst: 0 00:46:52.705 IEEE OUI Identifier: 00 00 00 00:46:52.705 Multi-path I/O 00:46:52.705 May have multiple subsystem ports: No 00:46:52.705 May have multiple controllers: No 00:46:52.705 Associated with SR-IOV VF: No 00:46:52.705 Max Data Transfer Size: Unlimited 00:46:52.705 Max Number of Namespaces: 0 00:46:52.705 Max Number of I/O Queues: 1024 00:46:52.705 NVMe Specification Version (VS): 1.3 00:46:52.705 NVMe Specification Version (Identify): 1.3 00:46:52.705 Maximum Queue Entries: 1024 00:46:52.705 Contiguous Queues Required: No 00:46:52.705 Arbitration Mechanisms Supported 00:46:52.705 Weighted Round Robin: Not Supported 00:46:52.705 Vendor Specific: Not Supported 00:46:52.705 Reset Timeout: 7500 ms 00:46:52.705 Doorbell Stride: 4 bytes 00:46:52.705 NVM Subsystem Reset: Not Supported 00:46:52.705 Command Sets Supported 00:46:52.705 NVM Command Set: Supported 00:46:52.705 Boot Partition: Not Supported 00:46:52.705 Memory Page Size Minimum: 4096 bytes 00:46:52.705 Memory Page Size Maximum: 4096 bytes 00:46:52.705 Persistent Memory Region: Not Supported 00:46:52.705 Optional Asynchronous Events Supported 00:46:52.705 Namespace Attribute Notices: Not Supported 00:46:52.705 Firmware Activation Notices: Not Supported 00:46:52.705 ANA Change Notices: Not Supported 00:46:52.705 PLE Aggregate Log Change Notices: Not Supported 00:46:52.705 LBA Status Info Alert Notices: Not Supported 00:46:52.705 EGE Aggregate Log Change Notices: Not Supported 00:46:52.705 Normal NVM Subsystem Shutdown event: Not Supported 00:46:52.705 Zone Descriptor Change Notices: Not Supported 00:46:52.705 Discovery Log Change Notices: Supported 00:46:52.705 Controller Attributes 00:46:52.705 128-bit Host Identifier: Not Supported 00:46:52.705 Non-Operational Permissive Mode: Not Supported 00:46:52.705 NVM Sets: Not Supported 00:46:52.705 Read Recovery Levels: Not Supported 00:46:52.705 Endurance Groups: Not Supported 00:46:52.705 Predictable Latency Mode: Not Supported 00:46:52.705 Traffic Based Keep ALive: Not Supported 00:46:52.705 Namespace Granularity: Not Supported 00:46:52.705 SQ Associations: Not Supported 00:46:52.705 UUID List: Not Supported 00:46:52.705 Multi-Domain Subsystem: Not Supported 00:46:52.705 Fixed Capacity Management: Not Supported 00:46:52.705 Variable Capacity Management: Not Supported 00:46:52.705 Delete Endurance Group: Not Supported 00:46:52.705 Delete NVM Set: Not Supported 00:46:52.705 Extended LBA Formats Supported: Not Supported 00:46:52.705 Flexible Data Placement Supported: Not Supported 00:46:52.705 00:46:52.705 Controller Memory Buffer Support 00:46:52.705 ================================ 00:46:52.705 Supported: No 
00:46:52.705 00:46:52.705 Persistent Memory Region Support 00:46:52.705 ================================ 00:46:52.705 Supported: No 00:46:52.705 00:46:52.705 Admin Command Set Attributes 00:46:52.705 ============================ 00:46:52.705 Security Send/Receive: Not Supported 00:46:52.705 Format NVM: Not Supported 00:46:52.705 Firmware Activate/Download: Not Supported 00:46:52.705 Namespace Management: Not Supported 00:46:52.705 Device Self-Test: Not Supported 00:46:52.705 Directives: Not Supported 00:46:52.705 NVMe-MI: Not Supported 00:46:52.705 Virtualization Management: Not Supported 00:46:52.705 Doorbell Buffer Config: Not Supported 00:46:52.705 Get LBA Status Capability: Not Supported 00:46:52.705 Command & Feature Lockdown Capability: Not Supported 00:46:52.705 Abort Command Limit: 1 00:46:52.705 Async Event Request Limit: 1 00:46:52.705 Number of Firmware Slots: N/A 00:46:52.705 Firmware Slot 1 Read-Only: N/A 00:46:52.705 Firmware Activation Without Reset: N/A 00:46:52.705 Multiple Update Detection Support: N/A 00:46:52.705 Firmware Update Granularity: No Information Provided 00:46:52.705 Per-Namespace SMART Log: No 00:46:52.705 Asymmetric Namespace Access Log Page: Not Supported 00:46:52.705 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:46:52.705 Command Effects Log Page: Not Supported 00:46:52.705 Get Log Page Extended Data: Supported 00:46:52.705 Telemetry Log Pages: Not Supported 00:46:52.705 Persistent Event Log Pages: Not Supported 00:46:52.705 Supported Log Pages Log Page: May Support 00:46:52.706 Commands Supported & Effects Log Page: Not Supported 00:46:52.706 Feature Identifiers & Effects Log Page:May Support 00:46:52.706 NVMe-MI Commands & Effects Log Page: May Support 00:46:52.706 Data Area 4 for Telemetry Log: Not Supported 00:46:52.706 Error Log Page Entries Supported: 1 00:46:52.706 Keep Alive: Not Supported 00:46:52.706 00:46:52.706 NVM Command Set Attributes 00:46:52.706 ========================== 00:46:52.706 Submission Queue Entry Size 00:46:52.706 Max: 1 00:46:52.706 Min: 1 00:46:52.706 Completion Queue Entry Size 00:46:52.706 Max: 1 00:46:52.706 Min: 1 00:46:52.706 Number of Namespaces: 0 00:46:52.706 Compare Command: Not Supported 00:46:52.706 Write Uncorrectable Command: Not Supported 00:46:52.706 Dataset Management Command: Not Supported 00:46:52.706 Write Zeroes Command: Not Supported 00:46:52.706 Set Features Save Field: Not Supported 00:46:52.706 Reservations: Not Supported 00:46:52.706 Timestamp: Not Supported 00:46:52.706 Copy: Not Supported 00:46:52.706 Volatile Write Cache: Not Present 00:46:52.706 Atomic Write Unit (Normal): 1 00:46:52.706 Atomic Write Unit (PFail): 1 00:46:52.706 Atomic Compare & Write Unit: 1 00:46:52.706 Fused Compare & Write: Not Supported 00:46:52.706 Scatter-Gather List 00:46:52.706 SGL Command Set: Supported 00:46:52.706 SGL Keyed: Not Supported 00:46:52.706 SGL Bit Bucket Descriptor: Not Supported 00:46:52.706 SGL Metadata Pointer: Not Supported 00:46:52.706 Oversized SGL: Not Supported 00:46:52.706 SGL Metadata Address: Not Supported 00:46:52.706 SGL Offset: Supported 00:46:52.706 Transport SGL Data Block: Not Supported 00:46:52.706 Replay Protected Memory Block: Not Supported 00:46:52.706 00:46:52.706 Firmware Slot Information 00:46:52.706 ========================= 00:46:52.706 Active slot: 0 00:46:52.706 00:46:52.706 00:46:52.706 Error Log 00:46:52.706 ========= 00:46:52.706 00:46:52.706 Active Namespaces 00:46:52.706 ================= 00:46:52.706 Discovery Log Page 00:46:52.706 ================== 00:46:52.706 
Generation Counter: 2 00:46:52.706 Number of Records: 2 00:46:52.706 Record Format: 0 00:46:52.706 00:46:52.706 Discovery Log Entry 0 00:46:52.706 ---------------------- 00:46:52.706 Transport Type: 3 (TCP) 00:46:52.706 Address Family: 1 (IPv4) 00:46:52.706 Subsystem Type: 3 (Current Discovery Subsystem) 00:46:52.706 Entry Flags: 00:46:52.706 Duplicate Returned Information: 0 00:46:52.706 Explicit Persistent Connection Support for Discovery: 0 00:46:52.706 Transport Requirements: 00:46:52.706 Secure Channel: Not Specified 00:46:52.706 Port ID: 1 (0x0001) 00:46:52.706 Controller ID: 65535 (0xffff) 00:46:52.706 Admin Max SQ Size: 32 00:46:52.706 Transport Service Identifier: 4420 00:46:52.706 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:46:52.706 Transport Address: 10.0.0.1 00:46:52.706 Discovery Log Entry 1 00:46:52.706 ---------------------- 00:46:52.706 Transport Type: 3 (TCP) 00:46:52.706 Address Family: 1 (IPv4) 00:46:52.706 Subsystem Type: 2 (NVM Subsystem) 00:46:52.706 Entry Flags: 00:46:52.706 Duplicate Returned Information: 0 00:46:52.706 Explicit Persistent Connection Support for Discovery: 0 00:46:52.706 Transport Requirements: 00:46:52.706 Secure Channel: Not Specified 00:46:52.706 Port ID: 1 (0x0001) 00:46:52.706 Controller ID: 65535 (0xffff) 00:46:52.706 Admin Max SQ Size: 32 00:46:52.706 Transport Service Identifier: 4420 00:46:52.706 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:46:52.706 Transport Address: 10.0.0.1 00:46:52.706 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:52.966 EAL: No free 2048 kB hugepages reported on node 1 00:46:52.966 get_feature(0x01) failed 00:46:52.966 get_feature(0x02) failed 00:46:52.966 get_feature(0x04) failed 00:46:52.966 ===================================================== 00:46:52.966 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:52.966 ===================================================== 00:46:52.966 Controller Capabilities/Features 00:46:52.966 ================================ 00:46:52.966 Vendor ID: 0000 00:46:52.966 Subsystem Vendor ID: 0000 00:46:52.966 Serial Number: 9bd474e3a0944d3ad527 00:46:52.966 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:46:52.966 Firmware Version: 6.7.0-68 00:46:52.966 Recommended Arb Burst: 6 00:46:52.966 IEEE OUI Identifier: 00 00 00 00:46:52.966 Multi-path I/O 00:46:52.966 May have multiple subsystem ports: Yes 00:46:52.966 May have multiple controllers: Yes 00:46:52.966 Associated with SR-IOV VF: No 00:46:52.966 Max Data Transfer Size: Unlimited 00:46:52.966 Max Number of Namespaces: 1024 00:46:52.966 Max Number of I/O Queues: 128 00:46:52.966 NVMe Specification Version (VS): 1.3 00:46:52.966 NVMe Specification Version (Identify): 1.3 00:46:52.966 Maximum Queue Entries: 1024 00:46:52.966 Contiguous Queues Required: No 00:46:52.966 Arbitration Mechanisms Supported 00:46:52.966 Weighted Round Robin: Not Supported 00:46:52.966 Vendor Specific: Not Supported 00:46:52.966 Reset Timeout: 7500 ms 00:46:52.966 Doorbell Stride: 4 bytes 00:46:52.966 NVM Subsystem Reset: Not Supported 00:46:52.966 Command Sets Supported 00:46:52.966 NVM Command Set: Supported 00:46:52.966 Boot Partition: Not Supported 00:46:52.966 Memory Page Size Minimum: 4096 bytes 00:46:52.966 Memory Page Size Maximum: 4096 bytes 00:46:52.966 
Persistent Memory Region: Not Supported 00:46:52.966 Optional Asynchronous Events Supported 00:46:52.966 Namespace Attribute Notices: Supported 00:46:52.966 Firmware Activation Notices: Not Supported 00:46:52.966 ANA Change Notices: Supported 00:46:52.966 PLE Aggregate Log Change Notices: Not Supported 00:46:52.966 LBA Status Info Alert Notices: Not Supported 00:46:52.966 EGE Aggregate Log Change Notices: Not Supported 00:46:52.966 Normal NVM Subsystem Shutdown event: Not Supported 00:46:52.966 Zone Descriptor Change Notices: Not Supported 00:46:52.966 Discovery Log Change Notices: Not Supported 00:46:52.966 Controller Attributes 00:46:52.966 128-bit Host Identifier: Supported 00:46:52.966 Non-Operational Permissive Mode: Not Supported 00:46:52.966 NVM Sets: Not Supported 00:46:52.966 Read Recovery Levels: Not Supported 00:46:52.966 Endurance Groups: Not Supported 00:46:52.966 Predictable Latency Mode: Not Supported 00:46:52.966 Traffic Based Keep ALive: Supported 00:46:52.966 Namespace Granularity: Not Supported 00:46:52.966 SQ Associations: Not Supported 00:46:52.966 UUID List: Not Supported 00:46:52.966 Multi-Domain Subsystem: Not Supported 00:46:52.966 Fixed Capacity Management: Not Supported 00:46:52.966 Variable Capacity Management: Not Supported 00:46:52.966 Delete Endurance Group: Not Supported 00:46:52.966 Delete NVM Set: Not Supported 00:46:52.966 Extended LBA Formats Supported: Not Supported 00:46:52.967 Flexible Data Placement Supported: Not Supported 00:46:52.967 00:46:52.967 Controller Memory Buffer Support 00:46:52.967 ================================ 00:46:52.967 Supported: No 00:46:52.967 00:46:52.967 Persistent Memory Region Support 00:46:52.967 ================================ 00:46:52.967 Supported: No 00:46:52.967 00:46:52.967 Admin Command Set Attributes 00:46:52.967 ============================ 00:46:52.967 Security Send/Receive: Not Supported 00:46:52.967 Format NVM: Not Supported 00:46:52.967 Firmware Activate/Download: Not Supported 00:46:52.967 Namespace Management: Not Supported 00:46:52.967 Device Self-Test: Not Supported 00:46:52.967 Directives: Not Supported 00:46:52.967 NVMe-MI: Not Supported 00:46:52.967 Virtualization Management: Not Supported 00:46:52.967 Doorbell Buffer Config: Not Supported 00:46:52.967 Get LBA Status Capability: Not Supported 00:46:52.967 Command & Feature Lockdown Capability: Not Supported 00:46:52.967 Abort Command Limit: 4 00:46:52.967 Async Event Request Limit: 4 00:46:52.967 Number of Firmware Slots: N/A 00:46:52.967 Firmware Slot 1 Read-Only: N/A 00:46:52.967 Firmware Activation Without Reset: N/A 00:46:52.967 Multiple Update Detection Support: N/A 00:46:52.967 Firmware Update Granularity: No Information Provided 00:46:52.967 Per-Namespace SMART Log: Yes 00:46:52.967 Asymmetric Namespace Access Log Page: Supported 00:46:52.967 ANA Transition Time : 10 sec 00:46:52.967 00:46:52.967 Asymmetric Namespace Access Capabilities 00:46:52.967 ANA Optimized State : Supported 00:46:52.967 ANA Non-Optimized State : Supported 00:46:52.967 ANA Inaccessible State : Supported 00:46:52.967 ANA Persistent Loss State : Supported 00:46:52.967 ANA Change State : Supported 00:46:52.967 ANAGRPID is not changed : No 00:46:52.967 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:46:52.967 00:46:52.967 ANA Group Identifier Maximum : 128 00:46:52.967 Number of ANA Group Identifiers : 128 00:46:52.967 Max Number of Allowed Namespaces : 1024 00:46:52.967 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:46:52.967 Command Effects Log Page: Supported 
00:46:52.967 Get Log Page Extended Data: Supported 00:46:52.967 Telemetry Log Pages: Not Supported 00:46:52.967 Persistent Event Log Pages: Not Supported 00:46:52.967 Supported Log Pages Log Page: May Support 00:46:52.967 Commands Supported & Effects Log Page: Not Supported 00:46:52.967 Feature Identifiers & Effects Log Page:May Support 00:46:52.967 NVMe-MI Commands & Effects Log Page: May Support 00:46:52.967 Data Area 4 for Telemetry Log: Not Supported 00:46:52.967 Error Log Page Entries Supported: 128 00:46:52.967 Keep Alive: Supported 00:46:52.967 Keep Alive Granularity: 1000 ms 00:46:52.967 00:46:52.967 NVM Command Set Attributes 00:46:52.967 ========================== 00:46:52.967 Submission Queue Entry Size 00:46:52.967 Max: 64 00:46:52.967 Min: 64 00:46:52.967 Completion Queue Entry Size 00:46:52.967 Max: 16 00:46:52.967 Min: 16 00:46:52.967 Number of Namespaces: 1024 00:46:52.967 Compare Command: Not Supported 00:46:52.967 Write Uncorrectable Command: Not Supported 00:46:52.967 Dataset Management Command: Supported 00:46:52.967 Write Zeroes Command: Supported 00:46:52.967 Set Features Save Field: Not Supported 00:46:52.967 Reservations: Not Supported 00:46:52.967 Timestamp: Not Supported 00:46:52.967 Copy: Not Supported 00:46:52.967 Volatile Write Cache: Present 00:46:52.967 Atomic Write Unit (Normal): 1 00:46:52.967 Atomic Write Unit (PFail): 1 00:46:52.967 Atomic Compare & Write Unit: 1 00:46:52.967 Fused Compare & Write: Not Supported 00:46:52.967 Scatter-Gather List 00:46:52.967 SGL Command Set: Supported 00:46:52.967 SGL Keyed: Not Supported 00:46:52.967 SGL Bit Bucket Descriptor: Not Supported 00:46:52.967 SGL Metadata Pointer: Not Supported 00:46:52.967 Oversized SGL: Not Supported 00:46:52.967 SGL Metadata Address: Not Supported 00:46:52.967 SGL Offset: Supported 00:46:52.967 Transport SGL Data Block: Not Supported 00:46:52.967 Replay Protected Memory Block: Not Supported 00:46:52.967 00:46:52.967 Firmware Slot Information 00:46:52.967 ========================= 00:46:52.967 Active slot: 0 00:46:52.967 00:46:52.967 Asymmetric Namespace Access 00:46:52.967 =========================== 00:46:52.967 Change Count : 0 00:46:52.967 Number of ANA Group Descriptors : 1 00:46:52.967 ANA Group Descriptor : 0 00:46:52.967 ANA Group ID : 1 00:46:52.967 Number of NSID Values : 1 00:46:52.967 Change Count : 0 00:46:52.967 ANA State : 1 00:46:52.967 Namespace Identifier : 1 00:46:52.967 00:46:52.967 Commands Supported and Effects 00:46:52.967 ============================== 00:46:52.967 Admin Commands 00:46:52.967 -------------- 00:46:52.967 Get Log Page (02h): Supported 00:46:52.967 Identify (06h): Supported 00:46:52.967 Abort (08h): Supported 00:46:52.967 Set Features (09h): Supported 00:46:52.967 Get Features (0Ah): Supported 00:46:52.967 Asynchronous Event Request (0Ch): Supported 00:46:52.967 Keep Alive (18h): Supported 00:46:52.967 I/O Commands 00:46:52.967 ------------ 00:46:52.967 Flush (00h): Supported 00:46:52.967 Write (01h): Supported LBA-Change 00:46:52.967 Read (02h): Supported 00:46:52.967 Write Zeroes (08h): Supported LBA-Change 00:46:52.967 Dataset Management (09h): Supported 00:46:52.967 00:46:52.967 Error Log 00:46:52.967 ========= 00:46:52.967 Entry: 0 00:46:52.967 Error Count: 0x3 00:46:52.967 Submission Queue Id: 0x0 00:46:52.967 Command Id: 0x5 00:46:52.967 Phase Bit: 0 00:46:52.967 Status Code: 0x2 00:46:52.967 Status Code Type: 0x0 00:46:52.967 Do Not Retry: 1 00:46:52.967 Error Location: 0x28 00:46:52.967 LBA: 0x0 00:46:52.967 Namespace: 0x0 00:46:52.967 Vendor Log 
Page: 0x0 00:46:52.967 ----------- 00:46:52.967 Entry: 1 00:46:52.967 Error Count: 0x2 00:46:52.967 Submission Queue Id: 0x0 00:46:52.967 Command Id: 0x5 00:46:52.967 Phase Bit: 0 00:46:52.967 Status Code: 0x2 00:46:52.967 Status Code Type: 0x0 00:46:52.967 Do Not Retry: 1 00:46:52.967 Error Location: 0x28 00:46:52.967 LBA: 0x0 00:46:52.967 Namespace: 0x0 00:46:52.967 Vendor Log Page: 0x0 00:46:52.967 ----------- 00:46:52.967 Entry: 2 00:46:52.967 Error Count: 0x1 00:46:52.967 Submission Queue Id: 0x0 00:46:52.967 Command Id: 0x4 00:46:52.967 Phase Bit: 0 00:46:52.967 Status Code: 0x2 00:46:52.967 Status Code Type: 0x0 00:46:52.967 Do Not Retry: 1 00:46:52.967 Error Location: 0x28 00:46:52.967 LBA: 0x0 00:46:52.967 Namespace: 0x0 00:46:52.967 Vendor Log Page: 0x0 00:46:52.967 00:46:52.967 Number of Queues 00:46:52.967 ================ 00:46:52.967 Number of I/O Submission Queues: 128 00:46:52.967 Number of I/O Completion Queues: 128 00:46:52.967 00:46:52.967 ZNS Specific Controller Data 00:46:52.967 ============================ 00:46:52.967 Zone Append Size Limit: 0 00:46:52.967 00:46:52.967 00:46:52.967 Active Namespaces 00:46:52.967 ================= 00:46:52.967 get_feature(0x05) failed 00:46:52.967 Namespace ID:1 00:46:52.967 Command Set Identifier: NVM (00h) 00:46:52.968 Deallocate: Supported 00:46:52.968 Deallocated/Unwritten Error: Not Supported 00:46:52.968 Deallocated Read Value: Unknown 00:46:52.968 Deallocate in Write Zeroes: Not Supported 00:46:52.968 Deallocated Guard Field: 0xFFFF 00:46:52.968 Flush: Supported 00:46:52.968 Reservation: Not Supported 00:46:52.968 Namespace Sharing Capabilities: Multiple Controllers 00:46:52.968 Size (in LBAs): 1953525168 (931GiB) 00:46:52.968 Capacity (in LBAs): 1953525168 (931GiB) 00:46:52.968 Utilization (in LBAs): 1953525168 (931GiB) 00:46:52.968 UUID: 3cfe5481-cb3d-4c5f-9934-6341313db541 00:46:52.968 Thin Provisioning: Not Supported 00:46:52.968 Per-NS Atomic Units: Yes 00:46:52.968 Atomic Boundary Size (Normal): 0 00:46:52.968 Atomic Boundary Size (PFail): 0 00:46:52.968 Atomic Boundary Offset: 0 00:46:52.968 NGUID/EUI64 Never Reused: No 00:46:52.968 ANA group ID: 1 00:46:52.968 Namespace Write Protected: No 00:46:52.968 Number of LBA Formats: 1 00:46:52.968 Current LBA Format: LBA Format #00 00:46:52.968 LBA Format #00: Data Size: 512 Metadata Size: 0 00:46:52.968 00:46:52.968 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:46:52.968 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:52.968 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:53.227 rmmod nvme_tcp 00:46:53.227 rmmod nvme_fabrics 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:46:53.227 09:00:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:53.227 09:00:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:46:55.135 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:46:55.394 09:00:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:57.308 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:57.308 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:57.308 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:57.308 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:57.308 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:57.308 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:57.308 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:57.308 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:57.308 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:57.308 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:57.308 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:57.308 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:57.308 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:57.308 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:57.308 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:46:57.308 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:58.249 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:46:58.249 00:46:58.249 real 0m13.072s 00:46:58.249 user 0m3.007s 00:46:58.249 sys 0m5.880s 00:46:58.249 09:00:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:58.249 09:00:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:46:58.249 ************************************ 00:46:58.249 END TEST nvmf_identify_kernel_target 00:46:58.249 ************************************ 00:46:58.249 09:00:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:46:58.249 09:00:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:46:58.249 09:00:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:46:58.249 09:00:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:58.249 09:00:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:46:58.510 ************************************ 00:46:58.510 START TEST nvmf_auth_host 00:46:58.510 ************************************ 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:46:58.510 * Looking for test storage... 00:46:58.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 
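The host identity established while sourcing nvmf/common.sh above is the same one the earlier kernel-target test handed to nvme discover (--hostnqn/--hostid with the uuid cd6acfbe-4794-e311-a299-001e67a97b02). A minimal sketch of that pattern; the derivation of the host ID from the generated NQN is illustrative, since the trace only shows the resulting values.

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare uuid, as seen in the trace
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420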
00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:46:58.510 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:46:58.511 09:00:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:47:01.803 09:00:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:47:01.803 Found 0000:84:00.0 (0x8086 - 0x159b) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:47:01.803 Found 0000:84:00.1 (0x8086 - 0x159b) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:01.803 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:47:01.804 Found net devices under 0000:84:00.0: cvl_0_0 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 
00:47:01.804 Found net devices under 0000:84:00.1: cvl_0_1 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:47:01.804 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:02.063 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:02.063 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:02.063 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:47:02.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:02.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:47:02.064 00:47:02.064 --- 10.0.0.2 ping statistics --- 00:47:02.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:02.064 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:02.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:02.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:47:02.064 00:47:02.064 --- 10.0.0.1 ping statistics --- 00:47:02.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:02.064 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2526081 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2526081 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2526081 ']' 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
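With the namespace up, the target application is started inside it and the harness waits on its RPC socket before issuing any rpc_cmd. Condensed from the trace (the nvmf_tgt path is the one this job uses; the pid bookkeeping and trap-based cleanup of the real nvmfappstart wrapper are omitted):

modprobe nvme-tcp                                    # host-side transport for the later connects
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth &                    # -L nvme_auth: debug logging for the auth path
nvmfpid=$!
# waitforlisten then polls until the app answers on /var/tmp/spdk.sock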
00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:02.064 09:00:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b03a394bd4e3a8b20051280ad6e27f5b 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vCN 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b03a394bd4e3a8b20051280ad6e27f5b 0 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b03a394bd4e3a8b20051280ad6e27f5b 0 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b03a394bd4e3a8b20051280ad6e27f5b 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:03.971 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vCN 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vCN 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.vCN 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:03.972 09:00:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=84273c2666d6d8043e240103a90a4100cd9656e4843b9f2331be9fab5d41ec01 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Qk3 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 84273c2666d6d8043e240103a90a4100cd9656e4843b9f2331be9fab5d41ec01 3 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 84273c2666d6d8043e240103a90a4100cd9656e4843b9f2331be9fab5d41ec01 3 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=84273c2666d6d8043e240103a90a4100cd9656e4843b9f2331be9fab5d41ec01 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:47:03.972 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Qk3 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Qk3 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Qk3 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=20f3e678314bcc5b0ea69e38e7f5e286c756991a797eeb7c 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wXF 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 20f3e678314bcc5b0ea69e38e7f5e286c756991a797eeb7c 0 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 20f3e678314bcc5b0ea69e38e7f5e286c756991a797eeb7c 0 
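Every gen_dhchap_key call above follows the same recipe: draw len/2 random bytes as lowercase hex, stage a per-digest temp file, and wrap the hex secret into a DHHC-1:<digest-id>:<base64>: string via an inline, un-echoed "python -" step. A rough standalone equivalent; the python body here is an assumption, modelled on the NVMe DH-HMAC-CHAP secret format in which a CRC-32 of the secret is appended before base64 encoding:

digest=0     # 0=null, 1=sha256, 2=sha384, 3=sha512
len=32       # key length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # e.g. 16 random bytes -> 32 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
# assumed equivalent of the un-echoed "python -" step: append CRC-32, base64-encode
import sys, base64, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF
chmod 0600 "$file"
echo "$file"     # the caller records this path in keys[]/ckeys[]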
00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=20f3e678314bcc5b0ea69e38e7f5e286c756991a797eeb7c 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wXF 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wXF 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wXF 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c481701dcbbc3fe8873980d7d64277d5380500577916022 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Htu 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c481701dcbbc3fe8873980d7d64277d5380500577916022 2 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c481701dcbbc3fe8873980d7d64277d5380500577916022 2 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c481701dcbbc3fe8873980d7d64277d5380500577916022 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Htu 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Htu 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Htu 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:47:04.232 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:04.233 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:04.233 09:00:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:04.233 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:47:04.233 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:47:04.233 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:47:04.233 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=084b63c65bb3e69a144177021cd1502e 00:47:04.233 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Jbm 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 084b63c65bb3e69a144177021cd1502e 1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 084b63c65bb3e69a144177021cd1502e 1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=084b63c65bb3e69a144177021cd1502e 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Jbm 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Jbm 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Jbm 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dd69aa46ed66902e6c33d4dd4c757e01 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RDQ 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd69aa46ed66902e6c33d4dd4c757e01 1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd69aa46ed66902e6c33d4dd4c757e01 1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=dd69aa46ed66902e6c33d4dd4c757e01 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RDQ 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RDQ 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.RDQ 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d57bfe87d60ead94c0345cb41a7f86375e2584a38b839e22 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NOT 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d57bfe87d60ead94c0345cb41a7f86375e2584a38b839e22 2 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d57bfe87d60ead94c0345cb41a7f86375e2584a38b839e22 2 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d57bfe87d60ead94c0345cb41a7f86375e2584a38b839e22 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:47:04.493 09:00:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NOT 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NOT 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.NOT 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:47:04.753 09:00:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=566a11107e55498e9138b745fb4a00a6 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Btx 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 566a11107e55498e9138b745fb4a00a6 0 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 566a11107e55498e9138b745fb4a00a6 0 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=566a11107e55498e9138b745fb4a00a6 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Btx 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Btx 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Btx 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f707f107a1afdd35fa8b90843999037df6f52b63c2b70c2612e105d8ead7e862 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0KM 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f707f107a1afdd35fa8b90843999037df6f52b63c2b70c2612e105d8ead7e862 3 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f707f107a1afdd35fa8b90843999037df6f52b63c2b70c2612e105d8ead7e862 3 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f707f107a1afdd35fa8b90843999037df6f52b63c2b70c2612e105d8ead7e862 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:47:04.753 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0KM 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0KM 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.0KM 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2526081 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2526081 ']' 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:05.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:05.013 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vCN 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Qk3 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qk3 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wXF 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Htu ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Htu 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Jbm 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.RDQ ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RDQ 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.NOT 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Btx ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Btx 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0KM 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:05.581 09:00:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:05.581 09:00:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:07.523 Waiting for block devices as requested 00:47:07.523 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:47:07.523 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:47:07.783 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:47:07.783 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:47:08.041 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:47:08.041 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:47:08.041 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:47:08.300 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:47:08.300 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:47:08.300 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:47:08.559 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:47:08.559 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:47:08.559 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:47:08.819 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:47:08.819 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:47:08.819 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:47:09.078 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:47:09.658 09:00:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:47:09.658 No valid GPT data, bailing 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:47:09.658 09:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:47:09.658 00:47:09.658 Discovery Log Number of Records 2, Generation counter 2 00:47:09.658 =====Discovery Log Entry 0====== 00:47:09.658 trtype: tcp 00:47:09.658 adrfam: ipv4 00:47:09.658 subtype: current discovery subsystem 00:47:09.658 treq: not specified, sq flow control disable supported 00:47:09.658 portid: 1 00:47:09.658 trsvcid: 4420 00:47:09.658 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:09.658 traddr: 10.0.0.1 00:47:09.658 eflags: none 00:47:09.658 sectype: none 00:47:09.658 =====Discovery Log Entry 1====== 00:47:09.658 trtype: tcp 00:47:09.658 adrfam: ipv4 00:47:09.658 subtype: nvme subsystem 00:47:09.658 treq: not specified, sq flow control disable supported 00:47:09.658 portid: 1 00:47:09.658 trsvcid: 4420 00:47:09.658 subnqn: nqn.2024-02.io.spdk:cnode0 00:47:09.658 traddr: 10.0.0.1 00:47:09.658 eflags: none 00:47:09.658 sectype: none 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:09.658 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:09.918 nvme0n1 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:09.918 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
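From here on, each connect_authenticate pass reduces to the same RPC sequence issued to the SPDK app inside the namespace, which then dials the kernel nvmet target on 10.0.0.1:4420: constrain the digests and DH groups, attach with the host key (plus the controller key when a ckey exists), check that the controller appeared, and detach. Condensed from the trace for the keyid=0 case, with rpc.py standing in for the harness's rpc_cmd wrapper:

# keys/ckeys were registered once earlier via keyring_file_add_key
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py bdev_nvme_get_controllers           # expect "nvme0" once DH-HMAC-CHAP succeeds
rpc.py bdev_nvme_detach_controller nvme0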
00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.177 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.178 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.437 nvme0n1 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:10.437 09:00:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.437 09:00:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.697 nvme0n1 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.697 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.963 nvme0n1 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.963 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.227 nvme0n1 00:47:11.227 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.227 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:11.227 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.227 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.227 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:11.227 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.488 09:00:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.488 nvme0n1 00:47:11.488 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.488 09:00:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:11.488 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.488 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.488 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.748 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.007 nvme0n1 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:12.007 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.008 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:12.273 
09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.273 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.534 nvme0n1 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:12.534 09:00:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.534 09:00:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.793 nvme0n1 00:47:12.793 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:12.794 09:00:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.794 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.054 nvme0n1 00:47:13.054 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.054 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:13.054 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:13.054 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.054 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.054 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:13.314 09:00:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.314 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.574 nvme0n1 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.574 09:00:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.574 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.151 nvme0n1 00:47:14.151 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:14.152 09:00:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.152 09:00:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.721 nvme0n1 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.721 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:15.288 nvme0n1 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.288 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:15.547 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.548 09:00:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:15.807 nvme0n1 00:47:15.807 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.807 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:15.807 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:15.807 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.807 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:16.065 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.065 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:16.065 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:16.065 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:16.066 09:00:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.066 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:16.631 nvme0n1 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.631 09:00:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:16.631 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.632 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:17.566 nvme0n1 00:47:17.566 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.566 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:17.566 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.566 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:17.566 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:17.566 09:00:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.566 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:17.566 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:17.566 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.566 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 
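Every iteration also replays the get_main_ns_ip helper (the nvmf/common.sh@741-755 markers): it maps the transport in use to the environment variable that holds the initiator-side address, dereferences it, and echoes the result (10.0.0.1 in this run). The following is a sketch of that selection logic reconstructed from the xtrace, an approximation rather than the literal library code; the TEST_TRANSPORT name and the indirect expansion are assumptions.

# Approximate reconstruction of get_main_ns_ip as observed in the trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # The trace evaluates [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]: bail out
    # if the transport is unset or has no matching address variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Pick the variable name for this transport, then dereference it
    # (NVMF_INITIATOR_IP -> 10.0.0.1 here).
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}
    [[ -z $ip ]] && return 1
    echo "$ip"
}

The address this prints is what the auth.sh@61 step passes to bdev_nvme_attach_controller as -a.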
00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.825 09:00:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:18.773 nvme0n1 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:18.773 09:00:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:18.773 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:18.774 09:00:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:19.711 nvme0n1 00:47:19.711 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:19.711 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:19.711 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:19.711 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:19.711 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:19.711 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:19.974 09:00:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:20.940 nvme0n1 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:47:20.940 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:20.941 09:00:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:21.884 nvme0n1 00:47:21.884 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:21.884 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:21.884 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:21.884 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:21.884 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:21.884 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:22.143 09:00:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:47:24.052 nvme0n1 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:24.052 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:24.053 09:00:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:25.962 nvme0n1 00:47:25.962 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:25.962 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:25.962 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:25.962 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:25.962 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:25.962 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:47:25.963 
09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:25.963 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:26.223 09:00:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:28.131 nvme0n1 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:28.131 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:28.132 
09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.132 09:00:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:30.042 nvme0n1 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:30.042 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:30.043 09:00:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:31.952 nvme0n1 00:47:31.952 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:31.952 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:31.952 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:31.952 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:31.952 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:31.952 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:32.212 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:32.213 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:32.213 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:32.213 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:32.213 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.213 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.474 nvme0n1 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.474 09:00:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.734 nvme0n1 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:32.735 09:00:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.735 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.995 nvme0n1 00:47:32.995 09:00:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.995 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.255 nvme0n1 00:47:33.255 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.255 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:33.255 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.255 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:33.255 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.255 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.255 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.256 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.515 nvme0n1 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.515 09:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.515 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:33.515 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:33.515 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.515 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:33.775 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.776 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.036 nvme0n1 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.036 
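
Each (digest, dhgroup, keyid) iteration traced above repeats the same host-side sequence. Condensed to the underlying calls for the iteration that just finished (sha384, ffdhe3072, keyid 0; rpc_cmd is the test suite's wrapper around the SPDK JSON-RPC client, and 10.0.0.1 port 4420 is the NVMF_INITIATOR_IP that get_main_ns_ip resolved above), the loop body is roughly:

    # restrict the host to the digest/dhgroup pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # connect with DH-HMAC-CHAP; --dhchap-ctrlr-key is passed only when ckey${keyid} is non-empty
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # authentication succeeded if the controller shows up, then tear it down before the next keyid
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
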
09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:34.036 09:00:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.036 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.296 nvme0n1 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.296 09:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.556 nvme0n1 00:47:34.556 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.556 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:34.556 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.556 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.556 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.816 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.077 nvme0n1 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:35.077 
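
The nvmet_auth_set_key entries in this trace (the echo 'hmac(sha384)', echo ffdhe2048/ffdhe3072 and echo DHHC-1:... lines at host/auth.sh@48-51) program the peer side of the handshake: the echoed values are written into the kernel nvmet target's per-host DH-HMAC-CHAP attributes, although xtrace does not show the redirection targets. A rough sketch of what one such step amounts to, with the configfs paths assumed from the usual nvmet layout and the secrets taken from the keyid=2 iteration earlier in this loop:

    # hypothetical illustration - destination paths are an assumption, values are from the trace
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"        # digest under test
    echo ffdhe3072      > "$host/dhchap_dhgroup"     # DH group under test
    echo "DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J:" > "$host/dhchap_key"
    echo "DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv:" > "$host/dhchap_ctrl_key"   # only when a ctrlr key exists for this keyid

The secrets themselves use the NVMe in-band authentication key format, DHHC-1:<id>:<base64 secret>:, where the <id> field (00-03) records which hash, if any, the generated secret was transformed with.
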
09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.077 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.337 nvme0n1 00:47:35.337 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.337 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:35.337 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.337 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.337 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:35.598 
09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.598 09:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.174 nvme0n1 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:36.174 09:00:48 
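On the initiator side, connect_authenticate goes entirely through SPDK JSON-RPC, as the rpc_cmd lines above show: bdev_nvme_set_options narrows the negotiable DH-CHAP digests and DH groups, and bdev_nvme_attach_controller then connects to 10.0.0.1:4420 using the named keys. A rough stand-alone equivalent of the sha384/ffdhe4096/keyid=1 pass is sketched below; the ./scripts/rpc.py path is assumed, and key1/ckey1 must already have been registered with the running SPDK application (that step is outside this excerpt).
# Sketch only -- host-side authenticated connect for one (digest, dhgroup, keyid) pass.
./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe4096
./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1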
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.174 09:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.767 nvme0n1 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:36.767 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.768 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.337 nvme0n1 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:37.337 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.338 09:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.908 nvme0n1 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:37.908 09:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.908 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:38.481 nvme0n1 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
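Each pass then verifies that exactly the expected controller came up and detaches it before the next key is tried; that is the bdev_nvme_get_controllers / jq / bdev_nvme_detach_controller sequence that just completed above. A condensed form of that check (rpc.py path assumed) could look like:
# Sketch only -- confirm the authenticated controller exists, then tear it
# down so the next (digest, dhgroup, keyid) combination starts clean.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1
./scripts/rpc.py bdev_nvme_detach_controller nvme0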
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:38.481 09:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:39.419 nvme0n1 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.419 09:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:40.801 nvme0n1 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:40.801 09:00:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.801 09:00:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.801 09:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:41.738 nvme0n1 00:47:41.738 09:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.738 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:41.738 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.738 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:41.738 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:41.738 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.738 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:41.738 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:41.739 09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.739 
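The get_main_ns_ip block that repeats throughout this trace (nvmf/common.sh@741-@755) resolves the address used for the attach: it maps the transport to the name of an environment variable and then dereferences it, which is why the trace shows ip=NVMF_INITIATOR_IP before the resolved 10.0.0.1. A condensed reading of that logic is sketched below; the surrounding variable names and values are inferred from this run, not taken from the helper's source.
# Sketch only -- address selection as suggested by the xtrace above.
declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
TEST_TRANSPORT=tcp            # transport under test in this run (assumed name)
NVMF_INITIATOR_IP=10.0.0.1    # value the helper resolves to in this run
varname=${ip_candidates[$TEST_TRANSPORT]}
ip=${!varname}                # indirect expansion -> 10.0.0.1
[[ -n $ip ]] && echo "$ip"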
09:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:42.676 nvme0n1 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:42.676 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:42.936 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:42.936 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:42.936 09:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:43.875 nvme0n1 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:43.875 09:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.875 09:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:45.783 nvme0n1 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:45.783 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.043 09:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:47.953 nvme0n1 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:47.953 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:47.954 
09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:47.954 09:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:49.862 nvme0n1 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:49.862 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:49.863 09:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:52.405 nvme0n1 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:52.405 09:01:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:52.405 09:01:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:52.405 09:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:53.816 nvme0n1 00:47:53.816 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:53.816 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:53.816 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:53.816 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:53.816 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:53.816 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.076 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:47:54.337 nvme0n1 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.337 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.598 nvme0n1 00:47:54.598 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.598 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:54.598 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.598 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.598 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:54.598 09:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:47:54.598 
09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.598 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.859 nvme0n1 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:54.859 
09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:54.859 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:54.860 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:54.860 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:54.860 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:54.860 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:54.860 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:54.860 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.120 nvme0n1 00:47:55.120 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.120 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:55.120 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.120 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.120 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:55.120 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.380 nvme0n1 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.380 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:55.381 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.381 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.381 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:55.641 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.641 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:55.641 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:55.641 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.641 09:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.641 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.901 nvme0n1 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.901 
09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:55.901 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:55.902 09:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:55.902 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.161 nvme0n1 00:47:56.161 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.161 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:56.161 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.161 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:56.161 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.161 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.420 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:56.420 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:56.420 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.420 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:56.421 09:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.421 09:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.680 nvme0n1 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.680 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.681 09:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:56.681 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:57.251 nvme0n1 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:47:57.251 
09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.251 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
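With keyid 4 detached, the ffdhe3072 pass is complete and the outer loops in host/auth.sh (the @101-@104 markers in the trace) move on to ffdhe4096 and, later in this log, ffdhe6144, reusing the same five key indexes. A rough paraphrase of that structure, reconstructed only from the markers visible here (the real dhgroups array may contain additional groups exercised elsewhere in the log):

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do          # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                        # host/auth.sh@102: keyids 0..4
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # @103: program the target-side key/ckey
            connect_authenticate sha512 "$dhgroup" "$keyid"   # @104: host attach, verify, detach cycle
        done
    done

Keyid 4 has no controller key (ckey is empty in the trace), so the @104 attach for that index is issued with --dhchap-key only, exercising unidirectional authentication.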
00:47:57.510 nvme0n1 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:47:57.510 09:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:47:57.510 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:57.511 09:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.078 nvme0n1 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:58.078 09:01:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:58.078 09:01:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.078 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.647 nvme0n1 00:47:58.647 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.647 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:58.647 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.647 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.647 09:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:58.647 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.216 nvme0n1 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:59.216 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:59.217 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:59.217 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:47:59.217 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.217 09:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.782 nvme0n1 00:47:59.782 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.782 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:47:59.782 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:47:59.782 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:59.783 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:00.350 nvme0n1 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:00.350 09:01:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:00.350 09:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:01.288 nvme0n1 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:48:01.288 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:48:01.289 09:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:01.289 09:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:02.223 nvme0n1 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:02.223 09:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:03.161 nvme0n1 00:48:03.161 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:03.161 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:03.162 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:03.421 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:03.422 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:48:03.422 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:03.422 09:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:04.359 nvme0n1 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:04.359 09:01:16 
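The host side of each iteration (connect_authenticate) is visible above as rpc_cmd calls: bdev_nvme_set_options narrows the allowed digests and DH groups, get_main_ns_ip resolves 10.0.0.1, and bdev_nvme_attach_controller connects with the key under test. rpc_cmd wraps SPDK's scripts/rpc.py, so one round looks roughly like the sketch below; it assumes the named keys (key3/ckey3) were registered with the initiator earlier in the run, which this excerpt does not show.

# Sketch of one connect_authenticate round issued directly against scripts/rpc.py.
# Assumes a running SPDK initiator on the default RPC socket and previously loaded
# keys named key3/ckey3 (their registration happens earlier in the test, not shown here).
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3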
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:04.359 09:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:05.299 nvme0n1 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAzYTM5NGJkNGUzYThiMjAwNTEyODBhZDZlMjdmNWKwrh68: 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: ]] 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODQyNzNjMjY2NmQ2ZDgwNDNlMjQwMTAzYTkwYTQxMDBjZDk2NTZlNDg0M2I5ZjIzMzFiZTlmYWI1ZDQxZWMwMcM2ZQs=: 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:48:05.299 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:05.560 09:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:07.478 nvme0n1 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:07.478 09:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:09.397 nvme0n1 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:09.397 09:01:21 
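Each successful attach in this section is followed by a short verification: bdev_nvme_get_controllers is piped through jq to confirm a controller named nvme0 exists, and the controller is detached before the next key is tried. A condensed sketch of that verify-and-detach step:

# Sketch of the verify/detach step repeated after every attach in this section.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1                      # the [[ nvme0 == \n\v\m\e\0 ]] check in the log
./scripts/rpc.py bdev_nvme_detach_controller nvme0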
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:48:09.397 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDg0YjYzYzY1YmIzZTY5YTE0NDE3NzAyMWNkMTUwMmWyfE7J: 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: ]] 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGQ2OWFhNDZlZDY2OTAyZTZjMzNkNGRkNGM3NTdlMDEnMsYv: 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:09.398 09:01:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:09.398 09:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:11.307 nvme0n1 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:11.307 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDU3YmZlODdkNjBlYWQ5NGMwMzQ1Y2I0MWE3Zjg2Mzc1ZTI1ODRhMzhiODM5ZTIyKXjOQg==: 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: ]] 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTY2YTExMTA3ZTU1NDk4ZTkxMzhiNzQ1ZmI0YTAwYTaovE7O: 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:48:11.308 09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:11.308 
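The whole sha512 stretch, first with ffdhe6144 and then with ffdhe8192, is driven by the nested loops visible at host/auth.sh@101-103 above (for dhgroup, for keyid). A condensed sketch of that driver follows; the keys/ckeys arrays and the helper bodies are defined earlier in the script and are only assumed here.

# Sketch of the loop that generates the iterations seen in this part of the log.
for dhgroup in "${dhgroups[@]}"; do            # ... ffdhe6144 ffdhe8192 for the sha512 rounds shown
    for keyid in "${!keys[@]}"; do             # key indices 0..4
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side: push key into kernel nvmet
        connect_authenticate sha512 "$dhgroup" "$keyid"  # host side: attach, verify, detach
    done
done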
09:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:13.241 nvme0n1 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjcwN2YxMDdhMWFmZGQzNWZhOGI5MDg0Mzk5OTAzN2RmNmY1MmI2M2MyYjcwYzI2MTJlMTA1ZDhlYWQ3ZTg2MlBnbGI=: 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:48:13.241 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:48:13.501 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:13.502 09:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.413 nvme0n1 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.413 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmM2U2NzgzMTRiY2M1YjBlYTY5ZTM4ZTdmNWUyODZjNzU2OTkxYTc5N2VlYjdjaL+h9Q==: 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0ODE3MDFkY2JiYzNmZTg4NzM5ODBkN2Q2NDI3N2Q1MzgwNTAwNTc3OTE2MDIyqltc0A==: 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.414 request: 00:48:15.414 { 00:48:15.414 "name": "nvme0", 00:48:15.414 "trtype": "tcp", 00:48:15.414 "traddr": "10.0.0.1", 00:48:15.414 "adrfam": "ipv4", 00:48:15.414 "trsvcid": "4420", 00:48:15.414 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:48:15.414 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:48:15.414 "prchk_reftag": false, 00:48:15.414 "prchk_guard": false, 00:48:15.414 "hdgst": false, 00:48:15.414 "ddgst": false, 00:48:15.414 "method": "bdev_nvme_attach_controller", 00:48:15.414 "req_id": 1 00:48:15.414 } 00:48:15.414 Got JSON-RPC error response 00:48:15.414 response: 00:48:15.414 { 00:48:15.414 "code": -5, 00:48:15.414 "message": "Input/output error" 00:48:15.414 } 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:48:15.414 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:15.675 09:01:27 
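The request/response block above is the first of the expected-failure cases: the kernel target still requires DH-HMAC-CHAP, so an attach without any --dhchap-key must fail, and the NOT prefix turns that failure into a passing check. The real helper lives in common/autotest_common.sh; below is only a minimal stand-in that matches the behaviour implied by the es=1 handling in the log.

# Sketch: a stand-in for the NOT helper used for the negative tests in this section.
NOT() {
    if "$@"; then
        return 1        # wrapped command unexpectedly succeeded
    fi
    return 0            # wrapped command failed, as these tests require
}
# Example matching the first failing attach above (no DH-CHAP key supplied):
NOT ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0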
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.675 09:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.675 request: 00:48:15.675 { 00:48:15.675 "name": "nvme0", 00:48:15.675 "trtype": "tcp", 00:48:15.675 "traddr": "10.0.0.1", 00:48:15.675 "adrfam": "ipv4", 00:48:15.675 "trsvcid": "4420", 00:48:15.675 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:48:15.675 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:48:15.675 "prchk_reftag": false, 00:48:15.675 "prchk_guard": false, 00:48:15.675 "hdgst": false, 00:48:15.675 "ddgst": false, 00:48:15.675 "dhchap_key": "key2", 00:48:15.675 "method": "bdev_nvme_attach_controller", 00:48:15.675 "req_id": 1 00:48:15.675 } 00:48:15.675 Got JSON-RPC error response 00:48:15.675 response: 00:48:15.675 { 00:48:15.675 "code": -5, 00:48:15.675 "message": "Input/output error" 00:48:15.675 } 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:48:15.675 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:15.936 request: 00:48:15.936 { 00:48:15.936 "name": "nvme0", 00:48:15.936 "trtype": "tcp", 00:48:15.936 "traddr": "10.0.0.1", 00:48:15.936 "adrfam": "ipv4", 00:48:15.936 "trsvcid": "4420", 00:48:15.936 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:48:15.936 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:48:15.936 "prchk_reftag": false, 00:48:15.936 "prchk_guard": false, 00:48:15.936 "hdgst": false, 00:48:15.936 "ddgst": false, 00:48:15.936 "dhchap_key": "key1", 00:48:15.936 "dhchap_ctrlr_key": "ckey2", 00:48:15.936 "method": "bdev_nvme_attach_controller", 00:48:15.936 "req_id": 1 00:48:15.936 } 00:48:15.936 Got JSON-RPC error response 00:48:15.936 response: 00:48:15.936 { 00:48:15.936 "code": -5, 00:48:15.936 "message": "Input/output error" 00:48:15.936 } 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:48:15.936 rmmod nvme_tcp 00:48:15.936 rmmod nvme_fabrics 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2526081 ']' 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2526081 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2526081 ']' 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2526081 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526081 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526081' 00:48:15.936 killing process with pid 2526081 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2526081 00:48:15.936 09:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2526081 00:48:18.479 09:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:48:18.479 09:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:48:18.479 09:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:48:18.479 09:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:18.479 09:01:30 
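From this point the log is teardown: nvmftestfini unloads nvme-tcp/nvme-fabrics and kills the nvmf target process (pid 2526081), and the lines that follow remove the kernel nvmet host and subsystem through configfs before setup.sh rebinds the devices. A sketch of that teardown order, using the paths that appear just below; the attribute behind the bare 'echo 0' is not captured in this excerpt and is assumed to be the namespace enable flag.

# Sketch of the kernel nvmet cleanup performed next in the log (host/auth.sh cleanup
# plus clean_kernel_target from nvmf/common.sh). Paths copied from the log lines below.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/namespaces/1/enable"            # assumed target of the bare 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet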
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:48:18.479 09:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:18.479 09:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:18.479 09:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:48:20.389 09:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:48:22.312 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:48:22.312 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:48:22.312 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:48:22.312 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:48:22.312 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:48:22.312 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:48:22.312 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:48:22.312 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:48:22.312 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:48:23.252 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:48:23.252 09:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vCN /tmp/spdk.key-null.wXF /tmp/spdk.key-sha256.Jbm /tmp/spdk.key-sha384.NOT /tmp/spdk.key-sha512.0KM /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:48:23.252 09:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:48:25.163 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:48:25.163 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:48:25.163 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:48:25.163 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:48:25.163 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:48:25.163 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:48:25.163 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:48:25.163 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:48:25.163 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:48:25.163 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:48:25.163 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:48:25.163 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:48:25.163 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:48:25.163 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:48:25.163 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:48:25.163 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:48:25.163 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:48:25.163 00:48:25.163 real 1m26.881s 00:48:25.163 user 1m25.532s 00:48:25.163 sys 0m10.757s 00:48:25.163 09:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:25.163 09:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:48:25.163 ************************************ 00:48:25.163 END TEST nvmf_auth_host 00:48:25.163 ************************************ 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:48:25.424 ************************************ 00:48:25.424 START TEST nvmf_digest 00:48:25.424 ************************************ 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:48:25.424 * Looking for test storage... 
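(Editorial sketch, not part of the captured output: the digest suite starting here is driven by host/digest.sh with the TCP transport, as the run_test line above shows. A minimal hand-run of the same test, assuming the SPDK checkout path used by this job and NICs already bound by scripts/setup.sh, and that the script normally needs root, would look roughly like:

  # hypothetical manual invocation; the CI harness wraps this in run_test
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo "$SPDK_DIR/test/nvmf/host/digest.sh" --transport=tcp
)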
00:48:25.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:48:25.424 
09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:48:25.424 09:01:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:48:28.756 Found 0000:84:00.0 (0x8086 - 0x159b) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:48:28.756 Found 0000:84:00.1 (0x8086 - 0x159b) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:48:28.756 
09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:48:28.756 Found net devices under 0000:84:00.0: cvl_0_0 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:48:28.756 Found net devices under 0000:84:00.1: cvl_0_1 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:28.756 09:01:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:28.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:48:29.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:29.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:48:29.017 00:48:29.017 --- 10.0.0.2 ping statistics --- 00:48:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:29.017 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:29.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:29.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:48:29.017 00:48:29.017 --- 10.0.0.1 ping statistics --- 00:48:29.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:29.017 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:48:29.017 ************************************ 00:48:29.017 START TEST nvmf_digest_clean 00:48:29.017 ************************************ 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2539004 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2539004 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2539004 ']' 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:29.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:29.017 [2024-07-23 09:01:41.491571] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:48:29.017 [2024-07-23 09:01:41.491759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:29.277 EAL: No free 2048 kB hugepages reported on node 1 00:48:29.277 [2024-07-23 09:01:41.682011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:29.848 [2024-07-23 09:01:42.102158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:29.848 [2024-07-23 09:01:42.102299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:29.848 [2024-07-23 09:01:42.102379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:29.848 [2024-07-23 09:01:42.102434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:29.848 [2024-07-23 09:01:42.102484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
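(Editorial sketch: the target whose startup banner follows was launched inside the cvl_0_0_ns_spdk namespace by nvmfappstart, as traced above. Reproduced from the trace, with a comment on why --wait-for-rpc is passed, the launch is roughly:

  # hold the target before subsystem init so the test can pre-configure it over its RPC socket
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc
)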
00:48:29.848 [2024-07-23 09:01:42.102592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:30.417 09:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:31.355 null0 00:48:31.355 [2024-07-23 09:01:43.548836] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:31.355 [2024-07-23 09:01:43.575068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2539280 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2539280 /var/tmp/bperf.sock 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2539280 ']' 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:31.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:31.355 09:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:31.355 [2024-07-23 09:01:43.687738] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:48:31.355 [2024-07-23 09:01:43.687924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539280 ] 00:48:31.355 EAL: No free 2048 kB hugepages reported on node 1 00:48:31.355 [2024-07-23 09:01:43.864532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:31.924 [2024-07-23 09:01:44.177433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:32.865 09:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:32.865 09:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:48:32.865 09:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:48:32.865 09:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:48:32.865 09:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:33.804 09:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:33.804 09:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:34.372 nvme0n1 00:48:34.372 09:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:48:34.372 09:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:34.632 Running I/O for 2 seconds... 
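(Editorial sketch: the nvme0n1 bdev exercised in this run was attached through bdevperf's private RPC socket with the NVMe/TCP data digest enabled. The two RPCs, copied from the trace above, are:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables the data digest (CRC32C) on this NVMe/TCP connection
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
)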
00:48:37.171 00:48:37.171 Latency(us) 00:48:37.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:37.171 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:48:37.171 nvme0n1 : 2.00 10965.88 42.84 0.00 0.00 11654.83 5072.97 24078.41 00:48:37.171 =================================================================================================================== 00:48:37.171 Total : 10965.88 42.84 0.00 0.00 11654.83 5072.97 24078.41 00:48:37.171 0 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:48:37.171 | select(.opcode=="crc32c") 00:48:37.171 | "\(.module_name) \(.executed)"' 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2539280 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2539280 ']' 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2539280 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2539280 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2539280' 00:48:37.171 killing process with pid 2539280 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2539280 00:48:37.171 Received shutdown signal, test time was about 2.000000 seconds 00:48:37.171 00:48:37.171 Latency(us) 00:48:37.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:37.171 =================================================================================================================== 00:48:37.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:37.171 09:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2539280 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2540073 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2540073 /var/tmp/bperf.sock 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2540073 ']' 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:38.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:38.554 09:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:38.554 [2024-07-23 09:01:51.069671] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:48:38.554 [2024-07-23 09:01:51.070019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540073 ] 00:48:38.554 I/O size of 131072 is greater than zero copy threshold (65536). 00:48:38.554 Zero copy mechanism will not be used. 
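(Editorial sketch: the pass/fail check that closed the previous run, and repeats after each run below, reads the accel framework's crc32c counters over the same bperf socket and expects the "software" module to report a non-zero executed count. Copied from the get_accel_stats trace above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the test then asserts: executed > 0 and module_name == software (no DSA in this configuration)
)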
00:48:38.814 EAL: No free 2048 kB hugepages reported on node 1 00:48:38.814 [2024-07-23 09:01:51.329669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:39.385 [2024-07-23 09:01:51.643533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:40.325 09:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:40.325 09:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:48:40.325 09:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:48:40.325 09:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:48:40.325 09:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:41.271 09:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:41.271 09:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:41.841 nvme0n1 00:48:41.841 09:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:48:41.841 09:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:42.100 I/O size of 131072 is greater than zero copy threshold (65536). 00:48:42.100 Zero copy mechanism will not be used. 00:48:42.100 Running I/O for 2 seconds... 
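(Editorial sketch: because bdevperf is started with -z, it sits idle after configuration and the timed workload is triggered externally. The trigger seen above each "Running I/O for 2 seconds..." line is:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
)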
00:48:44.044 00:48:44.044 Latency(us) 00:48:44.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:44.044 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:48:44.044 nvme0n1 : 2.01 2955.96 369.49 0.00 0.00 5403.91 2075.31 11845.03 00:48:44.044 =================================================================================================================== 00:48:44.044 Total : 2955.96 369.49 0.00 0.00 5403.91 2075.31 11845.03 00:48:44.044 0 00:48:44.044 09:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:48:44.044 09:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:48:44.044 09:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:48:44.044 09:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:48:44.044 09:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:48:44.044 | select(.opcode=="crc32c") 00:48:44.044 | "\(.module_name) \(.executed)"' 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2540073 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2540073 ']' 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2540073 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2540073 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2540073' 00:48:44.613 killing process with pid 2540073 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2540073 00:48:44.613 Received shutdown signal, test time was about 2.000000 seconds 00:48:44.613 00:48:44.613 Latency(us) 00:48:44.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:44.613 =================================================================================================================== 00:48:44.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:44.613 09:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2540073 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2540880 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2540880 /var/tmp/bperf.sock 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2540880 ']' 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:45.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:45.994 09:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:46.255 [2024-07-23 09:01:58.625488] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
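(Editorial sketch: the traffic generator for each pass is the bdevperf example app, given its own RPC socket and held until configured. The invocation for this randwrite / 4 KiB / queue-depth-128 pass, as traced above, is:

  # -m 2: core mask 0x2 (reactor on core 1); -o 4096 -q 128: 4 KiB I/O at QD 128; -t 2: 2-second run
  # -z keeps bdevperf waiting for an external perform_tests trigger; --wait-for-rpc pauses before framework init
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
)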
00:48:46.255 [2024-07-23 09:01:58.625835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540880 ] 00:48:46.516 EAL: No free 2048 kB hugepages reported on node 1 00:48:46.516 [2024-07-23 09:01:58.895894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:46.774 [2024-07-23 09:01:59.207741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:47.342 09:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:47.342 09:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:48:47.342 09:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:48:47.342 09:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:48:47.342 09:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:48.282 09:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:48.282 09:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:48.541 nvme0n1 00:48:48.541 09:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:48:48.541 09:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:48.541 Running I/O for 2 seconds... 
00:48:51.081 00:48:51.081 Latency(us) 00:48:51.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:51.081 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:48:51.081 nvme0n1 : 2.01 13084.57 51.11 0.00 0.00 9759.45 4757.43 21748.24 00:48:51.081 =================================================================================================================== 00:48:51.081 Total : 13084.57 51.11 0.00 0.00 9759.45 4757.43 21748.24 00:48:51.081 0 00:48:51.081 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:48:51.081 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:48:51.081 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:48:51.081 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:48:51.081 | select(.opcode=="crc32c") 00:48:51.081 | "\(.module_name) \(.executed)"' 00:48:51.081 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2540880 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2540880 ']' 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2540880 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2540880 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2540880' 00:48:51.340 killing process with pid 2540880 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2540880 00:48:51.340 Received shutdown signal, test time was about 2.000000 seconds 00:48:51.340 00:48:51.340 Latency(us) 00:48:51.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:51.340 =================================================================================================================== 00:48:51.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:51.340 09:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2540880 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2541547 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2541547 /var/tmp/bperf.sock 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2541547 ']' 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:52.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:52.721 09:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:48:52.721 [2024-07-23 09:02:05.141483] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:48:52.721 [2024-07-23 09:02:05.141667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541547 ] 00:48:52.721 I/O size of 131072 is greater than zero copy threshold (65536). 00:48:52.721 Zero copy mechanism will not be used. 
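This second clean-digest pass moves to 128 KiB random writes at queue depth 16; since 131072 bytes is above the 65536-byte zero-copy threshold reported above, the initiator falls back to regular (non zero-copy) sends for this workload. An annotated form of the bdevperf launch, with flag meanings per standard bdevperf usage (the argument values are the ones shown in the log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 2            core mask 0x2, i.e. a single reactor on core 1
  # -r <sock>       UNIX socket bdevperf serves RPCs on (the bperf_rpc/bperf_py target)
  # -w/-o/-q/-t     workload, I/O size in bytes, queue depth, run time in seconds
  # -z              do not start I/O until a perform_tests RPC arrives
  # --wait-for-rpc  defer framework init so accel options can be applied before framework_start_init
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc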
00:48:52.721 EAL: No free 2048 kB hugepages reported on node 1 00:48:52.980 [2024-07-23 09:02:05.303676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:53.318 [2024-07-23 09:02:05.618268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:54.258 09:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:54.258 09:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:48:54.258 09:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:48:54.258 09:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:48:54.259 09:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:55.197 09:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:55.197 09:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:48:55.456 nvme0n1 00:48:55.456 09:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:48:55.456 09:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:55.716 I/O size of 131072 is greater than zero copy threshold (65536). 00:48:55.716 Zero copy mechanism will not be used. 00:48:55.716 Running I/O for 2 seconds... 
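As in the first pass, the latency table below is only half of the verdict: digest.sh also reads back the accel framework's crc32c statistics to confirm the digests were computed by the expected module ('software' here, because the test was started with scan_dsa=false). A stand-alone sketch of that check, assuming bdevperf is still listening on /var/tmp/bperf.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  read -r acc_module acc_executed < <(
      "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # Pass only if at least one crc32c operation ran and it ran in the expected module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] \
      && echo "crc32c handled by $acc_module ($acc_executed operations)"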
00:48:57.626
00:48:57.627 Latency(us)
00:48:57.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:48:57.627 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:48:57.627 nvme0n1 : 2.00 3467.58 433.45 0.00 0.00 4598.62 4102.07 14272.28
00:48:57.627 ===================================================================================================================
00:48:57.627 Total : 3467.58 433.45 0.00 0.00 4598.62 4102.07 14272.28
00:48:57.627 0
00:48:57.627 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:48:57.627 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:48:57.627 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:48:57.627 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:48:57.627 | select(.opcode=="crc32c")
00:48:57.627 | "\(.module_name) \(.executed)"'
00:48:57.627 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2541547
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2541547 ']'
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2541547
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2541547
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2541547'
00:48:58.567 killing process with pid 2541547
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2541547
00:48:58.567 Received shutdown signal, test time was about 2.000000 seconds
00:48:58.567
00:48:58.567 Latency(us)
00:48:58.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:48:58.567 ===================================================================================================================
00:48:58.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:48:58.567 09:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@972 -- # wait 2541547 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2539004 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2539004 ']' 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2539004 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2539004 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2539004' 00:48:59.964 killing process with pid 2539004 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2539004 00:48:59.964 09:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2539004 00:49:02.508 00:49:02.508 real 0m33.285s 00:49:02.508 user 1m6.500s 00:49:02.508 sys 0m6.717s 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:49:02.508 ************************************ 00:49:02.508 END TEST nvmf_digest_clean 00:49:02.508 ************************************ 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:49:02.508 ************************************ 00:49:02.508 START TEST nvmf_digest_error 00:49:02.508 ************************************ 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2542630 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2542630 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2542630 ']' 00:49:02.508 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:02.509 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:02.509 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:02.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:02.509 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:02.509 09:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:02.509 [2024-07-23 09:02:14.936346] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:02.509 [2024-07-23 09:02:14.936665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:02.769 EAL: No free 2048 kB hugepages reported on node 1 00:49:02.769 [2024-07-23 09:02:15.257867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:03.339 [2024-07-23 09:02:15.730757] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:03.339 [2024-07-23 09:02:15.730885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:03.339 [2024-07-23 09:02:15.730945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:03.339 [2024-07-23 09:02:15.730999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:03.339 [2024-07-23 09:02:15.731050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
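For the digest-error half of the suite the target is restarted (nvmfappstart --wait-for-rpc) inside the cvl_0_0_ns_spdk network namespace with every tracepoint group enabled (-e 0xFFFF), so a failing run can be inspected from the trace buffer the application announces above. A sketch of taking that snapshot, using the command the target itself prints (the spdk_trace binary location is assumed from a standard SPDK build):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Live snapshot of the nvmf target's tracepoints (instance id 0, matching 'nvmf_tgt -i 0'):
  $SPDK/build/bin/spdk_trace -s nvmf -i 0
  # Or, as the notice suggests, keep the shared-memory file for offline analysis:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0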
00:49:03.339 [2024-07-23 09:02:15.731160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:49:03.909 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:03.909 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:49:03.909 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:03.909 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:49:03.909 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:04.170 [2024-07-23 09:02:16.454826] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.170 09:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:04.741 null0 00:49:04.741 [2024-07-23 09:02:17.159906] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:04.741 [2024-07-23 09:02:17.186088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2542911 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2542911 /var/tmp/bperf.sock 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2542911 ']' 
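The notices above cover the target-side preparation for the error test: the crc32c opcode is routed to the 'error' accel module, a null bdev (null0) backs the namespace, and an NVMe/TCP listener is opened on 10.0.0.2:4420. The exact RPCs digest.sh issues are not shown in this log, so the following is only a comparable sketch using the standard rpc.py commands; the bdev size, block size, subsystem serial number and the default /var/tmp/spdk.sock socket are assumptions:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC_TGT=$SPDK/scripts/rpc.py                          # nvmf_tgt above listens on /var/tmp/spdk.sock
  $RPC_TGT accel_assign_opc -o crc32c -m error          # send crc32c through the error-injection module
  $RPC_TGT framework_start_init
  $RPC_TGT bdev_null_create null0 100 4096              # hypothetical 100 MB null bdev with 4 KiB blocks
  $RPC_TGT nvmf_create_transport -t tcp
  $RPC_TGT nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC_TGT nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC_TGT nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4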
00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:04.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:04.741 09:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:05.002 [2024-07-23 09:02:17.354111] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:05.002 [2024-07-23 09:02:17.354412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542911 ] 00:49:05.002 EAL: No free 2048 kB hugepages reported on node 1 00:49:05.263 [2024-07-23 09:02:17.590276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:05.523 [2024-07-23 09:02:17.903834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:06.906 09:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:06.906 09:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:49:06.906 09:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:06.906 09:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:07.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:49:07.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:07.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:07.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:08.107 nvme0n1 00:49:08.107 09:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:49:08.107 09:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:08.107 09:02:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:08.107 09:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:08.107 09:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:49:08.107 09:02:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:08.107 Running I/O for 2 seconds... 00:49:08.107 [2024-07-23 09:02:20.521009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.107 [2024-07-23 09:02:20.521112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.107 [2024-07-23 09:02:20.521153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.107 [2024-07-23 09:02:20.549936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.107 [2024-07-23 09:02:20.550008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.107 [2024-07-23 09:02:20.550046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.107 [2024-07-23 09:02:20.567579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.107 [2024-07-23 09:02:20.567640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.107 [2024-07-23 09:02:20.567675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.107 [2024-07-23 09:02:20.594813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.107 [2024-07-23 09:02:20.594872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.107 [2024-07-23 09:02:20.594907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.107 [2024-07-23 09:02:20.620186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.107 [2024-07-23 09:02:20.620247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.107 [2024-07-23 09:02:20.620283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.642037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.642101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 
09:02:20.642137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.667475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.667537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.667573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.690831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.690891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.690926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.715073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.715133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.715168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.733506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.733566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.733613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.760589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.760648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.760684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.786692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.786751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.786787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.809200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.809259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.809295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.833322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.833380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.833416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.853139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.853197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.853232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.368 [2024-07-23 09:02:20.878246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.368 [2024-07-23 09:02:20.878306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.368 [2024-07-23 09:02:20.878353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:20.900611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:20.900671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:20.900706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:20.925148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:20.925207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:20.925243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:20.943612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:20.943679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:20.943715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:20.967993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 
09:02:20.968051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:20.968087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:20.992628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:20.992688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:20.992745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:21.016509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:21.016567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:21.016601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:21.037412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:21.037469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:21.037504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:21.058630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:21.058689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:21.058724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:21.078048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:21.078107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:21.078142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:21.102428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:21.102485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:21.102521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.628 [2024-07-23 09:02:21.127743] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.628 [2024-07-23 09:02:21.127802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.628 [2024-07-23 09:02:21.127845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.151964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.152024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.152060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.176529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.176588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.176624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.195960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.196019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.196054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.224305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.224373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.224409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.246138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.246197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.246233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.268914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.268973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.269008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.287522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.287580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.287615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.313416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.313476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.313512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.334301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.334378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.334415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.359990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.360048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.360085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.385243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.385301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.385349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:08.889 [2024-07-23 09:02:21.404634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:08.889 [2024-07-23 09:02:21.404692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:08.889 [2024-07-23 09:02:21.404727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.430524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.430584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.430618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.458213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.458271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.458306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.479211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.479269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.479305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.504439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.504497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.504532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.523017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.523075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.523110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.550300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.550371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.550406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.577799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.577859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.577894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.596364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.596423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5964 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.596458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.621384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.621459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.621496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.644562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.644621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.644657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.151 [2024-07-23 09:02:21.665243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.151 [2024-07-23 09:02:21.665303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.151 [2024-07-23 09:02:21.665352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.684931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.684992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.685049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.708705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.708765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.708800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.733925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.733994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.734030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.759390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.759450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.759486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.778725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.778784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.778820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.806668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.806728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.806764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.832210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.832271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.832320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.851018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.851076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.851112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.879647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.879706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.879742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.902528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.902588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.902622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.413 [2024-07-23 09:02:21.922531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f7a00) 00:49:09.413 [2024-07-23 09:02:21.922588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.413 [2024-07-23 09:02:21.922625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:21.945250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:21.945327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:21.945365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:21.967630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:21.967688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:21.967724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:21.988951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:21.989009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:21.989043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.009688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.009746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.009781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.034255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.034324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.034362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.058692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.058750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.058785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.080826] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.080887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.080924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.103359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.103417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.103452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.122664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.122731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.122768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.146170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.146227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.146262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.165045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.165104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.165139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.674 [2024-07-23 09:02:22.188971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.674 [2024-07-23 09:02:22.189036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.674 [2024-07-23 09:02:22.189076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.935 [2024-07-23 09:02:22.213967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.935 [2024-07-23 09:02:22.214027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.935 [2024-07-23 09:02:22.214062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.935 [2024-07-23 09:02:22.237385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.935 [2024-07-23 09:02:22.237443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.935 [2024-07-23 09:02:22.237479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.935 [2024-07-23 09:02:22.259697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.935 [2024-07-23 09:02:22.259756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.935 [2024-07-23 09:02:22.259793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.935 [2024-07-23 09:02:22.286677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.935 [2024-07-23 09:02:22.286736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.935 [2024-07-23 09:02:22.286771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.935 [2024-07-23 09:02:22.303981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.935 [2024-07-23 09:02:22.304040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.935 [2024-07-23 09:02:22.304075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.935 [2024-07-23 09:02:22.330522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.935 [2024-07-23 09:02:22.330581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.935 [2024-07-23 09:02:22.330617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.936 [2024-07-23 09:02:22.356304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.936 [2024-07-23 09:02:22.356373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.936 [2024-07-23 09:02:22.356410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:09.936 [2024-07-23 09:02:22.377037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:09.936 [2024-07-23 09:02:22.377096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:09.936 [2024-07-23 09:02:22.377133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:49:09.936 [2024-07-23 09:02:22.402414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00)
00:49:09.936 [2024-07-23 09:02:22.402472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:49:09.936 [2024-07-23 09:02:22.402507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:49:09.936 [2024-07-23 09:02:22.427435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00)
00:49:09.936 [2024-07-23 09:02:22.427495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:49:09.936 [2024-07-23 09:02:22.427531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:49:09.936 [2024-07-23 09:02:22.452135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00)
00:49:09.936 [2024-07-23 09:02:22.452194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:49:09.936 [2024-07-23 09:02:22.452230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:49:10.197 [2024-07-23 09:02:22.472472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00)
00:49:10.197 [2024-07-23 09:02:22.472531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:49:10.197 [2024-07-23 09:02:22.472567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:49:10.197 [2024-07-23 09:02:22.494284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00)
00:49:10.197 [2024-07-23 09:02:22.494352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:49:10.197 [2024-07-23 09:02:22.494389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:49:10.197
00:49:10.197 Latency(us)
00:49:10.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:10.197 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:49:10.197 nvme0n1 : 2.01 10976.44 42.88 0.00 0.00 11639.57 5825.42 32428.18
00:49:10.197 ===================================================================================================================
00:49:10.197 Total : 10976.44 42.88 0.00 0.00 11639.57 5825.42 32428.18
00:49:10.197 0
00:49:10.197 09:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:49:10.197 09:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:49:10.197 09:02:22
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:49:10.197 | .driver_specific
00:49:10.197 | .nvme_error
00:49:10.197 | .status_code
00:49:10.197 | .command_transient_transport_error'
00:49:10.197 09:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 86 > 0 ))
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2542911
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2542911 ']'
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2542911
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2542911
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2542911'
00:49:10.769 killing process with pid 2542911
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2542911
00:49:10.769 Received shutdown signal, test time was about 2.000000 seconds
00:49:10.769
00:49:10.769 Latency(us)
00:49:10.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:10.769 ===================================================================================================================
00:49:10.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:49:10.769 09:02:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2542911
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2543715
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2543715 /var/tmp/bperf.sock
00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 --
# '[' -z 2543715 ']' 00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:12.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:12.153 09:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:12.153 [2024-07-23 09:02:24.605163] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:12.153 [2024-07-23 09:02:24.605523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543715 ] 00:49:12.153 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:12.153 Zero copy mechanism will not be used. 00:49:12.414 EAL: No free 2048 kB hugepages reported on node 1 00:49:12.414 [2024-07-23 09:02:24.867291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:12.674 [2024-07-23 09:02:25.179921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:14.069 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:14.069 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:14.070 09:02:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:15.011 nvme0n1 00:49:15.011 09:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:49:15.011 09:02:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:15.011 09:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:15.011 09:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:15.011 09:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:49:15.011 09:02:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:15.011 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:15.011 Zero copy mechanism will not be used. 00:49:15.011 Running I/O for 2 seconds... 00:49:15.011 [2024-07-23 09:02:27.511047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.011 [2024-07-23 09:02:27.511154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.011 [2024-07-23 09:02:27.511197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.011 [2024-07-23 09:02:27.522635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.011 [2024-07-23 09:02:27.522695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.011 [2024-07-23 09:02:27.522732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.272 [2024-07-23 09:02:27.534267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.272 [2024-07-23 09:02:27.534341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.272 [2024-07-23 09:02:27.534380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.272 [2024-07-23 09:02:27.547303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.272 [2024-07-23 09:02:27.547375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.272 [2024-07-23 09:02:27.547413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.272 [2024-07-23 09:02:27.558447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.272 [2024-07-23 09:02:27.558507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.272 [2024-07-23 09:02:27.558543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.272 [2024-07-23 09:02:27.569164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.569225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.569261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.578729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.578788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.578825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.589678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.589737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.589773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.600534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.600593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.600638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.612004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.612064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.612099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.623179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.623238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.623275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.635723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.635785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.635821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.273 
[2024-07-23 09:02:27.647805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.647866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.647902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.660101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.660161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.660198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.671133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.671194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.671231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.684476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.684537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.684575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.697243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.697304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.697352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.709801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.709871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.709909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.717970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.718030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.718066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.727112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.727170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.727207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.739405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.739464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.739502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.751257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.751328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.751367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.762644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.762704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.762739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.774486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.774549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.774586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.273 [2024-07-23 09:02:27.785779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.273 [2024-07-23 09:02:27.785840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.273 [2024-07-23 09:02:27.785876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.534 [2024-07-23 09:02:27.796952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.534 [2024-07-23 09:02:27.797014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.534 [2024-07-23 
09:02:27.797053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.534 [2024-07-23 09:02:27.808110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.534 [2024-07-23 09:02:27.808170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.534 [2024-07-23 09:02:27.808207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.534 [2024-07-23 09:02:27.817892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.534 [2024-07-23 09:02:27.817952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.534 [2024-07-23 09:02:27.817988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.825146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.825216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.825252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.835962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.836022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.836059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.847650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.847710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.847746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.860206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.860270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.860317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.872326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.872399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.872436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.883839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.883898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.883935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.895508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.895589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.895628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.907004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.907065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.907101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.918107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.918166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.918210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.929869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.929929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.929965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.940635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.940696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.940732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.953343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 
09:02:27.953414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.953450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.964508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.964578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.964615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.975300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.975376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.975412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.985761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.985820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.985858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:27.996287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:27.996366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:27.996404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:28.005921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:28.005980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:28.006016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:28.016221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:28.016279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:28.016326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:28.026112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:28.026173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:28.026210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:28.036236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:28.036294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:28.036343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.535 [2024-07-23 09:02:28.046678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.535 [2024-07-23 09:02:28.046737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.535 [2024-07-23 09:02:28.046773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.796 [2024-07-23 09:02:28.057820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.796 [2024-07-23 09:02:28.057881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.796 [2024-07-23 09:02:28.057919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.796 [2024-07-23 09:02:28.068989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.796 [2024-07-23 09:02:28.069047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.796 [2024-07-23 09:02:28.069106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.796 [2024-07-23 09:02:28.080467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.796 [2024-07-23 09:02:28.080538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.796 [2024-07-23 09:02:28.080574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.796 [2024-07-23 09:02:28.091930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.796 [2024-07-23 09:02:28.091989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.092025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.797 
[2024-07-23 09:02:28.102947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.103006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.103041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.113927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.113985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.114021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.120301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.120368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.120405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.130413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.130471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.130508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.141718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.141777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.141813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.152841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.152901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.152937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.164169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.164229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.164266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.176012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.176071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.176107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.187917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.187977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.188013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.199584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.199643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.199678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.210771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.210829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.210864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.222182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.222240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.222275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.233508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.233567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.233604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.244425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.244484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 
09:02:28.244520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.255280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.255348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.255385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.266282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.266351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.266399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.277360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.277442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.277480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.287687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.287745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.287781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.298241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.298301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.298357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:15.797 [2024-07-23 09:02:28.309495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:15.797 [2024-07-23 09:02:28.309556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:15.797 [2024-07-23 09:02:28.309591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.320446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.320507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.320544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.331380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.331439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.331474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.342512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.342572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.342608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.353497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.353556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.353591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.364485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.364545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.364582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.375551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.375612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.375648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.386437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.386500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.386540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.397416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.397475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.397512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.408393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.408453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.408489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.419305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.419375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.419412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.430267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.430337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.430376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.441222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.441283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.441332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.452221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.452281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.452343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.463138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.463198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.463234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.057 [2024-07-23 09:02:28.474078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f7a00) 00:49:16.057 [2024-07-23 09:02:28.474136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.057 [2024-07-23 09:02:28.474173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.484971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.485028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.485064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.495983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.496041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.496077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.506821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.506881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.506916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.517873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.517932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.517969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.528791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.528853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.528889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.539890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.539968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.540005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.550878] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.550942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.550980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.563075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.563136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.563173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.058 [2024-07-23 09:02:28.571624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.058 [2024-07-23 09:02:28.571683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.058 [2024-07-23 09:02:28.571719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.580939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.580998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.581035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.590250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.590318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.590357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.599391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.599449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.599486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.611217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.611277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.611323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.622085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.622145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.622181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.633281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.633352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.633402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.644385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.644444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.644489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.655602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.655662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.655698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.666357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.666416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.666453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.676657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.676716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.676752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.687254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.687322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.687363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.697914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.319 [2024-07-23 09:02:28.697972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.319 [2024-07-23 09:02:28.698009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.319 [2024-07-23 09:02:28.708663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.708725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.708763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.719560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.719620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.719657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.730548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.730607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.730643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.741453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.741511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.741548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.752425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.752483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.752520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.763329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.763386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.763422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.774236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.774295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.774345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.785108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.785169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.785206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.796144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.796203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.796240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.807154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.807214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.807250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.818211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.818286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.818350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.320 [2024-07-23 09:02:28.829121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.320 [2024-07-23 09:02:28.829180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.320 [2024-07-23 09:02:28.829216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.581 [2024-07-23 09:02:28.840071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.581 [2024-07-23 09:02:28.840132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.581 [2024-07-23 09:02:28.840168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.581 [2024-07-23 09:02:28.851013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.581 [2024-07-23 09:02:28.851073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.851108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.861893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.861952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.861987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.872784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.872844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.872879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.883357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.883416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.883452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.893398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.893459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.893495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.903730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.903788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.903825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.914264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.914353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.914391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.924686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.924744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.924781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.935124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.935183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.935219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.946226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.946284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.946335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.957204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.957262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.957298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.968153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.968210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.968246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.979242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.979300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.979351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:28.990248] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:28.990305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:28.990358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.001191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.001250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.001296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.012241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.012299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.012349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.023100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.023158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.023195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.034033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.034093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.034130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.044981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.045044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.045080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.056637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.056697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.056733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.067607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.067668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.067705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.078578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.078638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.078673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.089669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.089734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.089772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.582 [2024-07-23 09:02:29.100357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.582 [2024-07-23 09:02:29.100427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.582 [2024-07-23 09:02:29.100465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.110785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.110845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.110882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.120252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.120322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.120373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.130635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.130694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.130730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.141079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.141138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.141174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.151188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.151248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.151283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.161595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.161665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.161701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.172334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.172393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.172429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.182951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.183011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.183057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.193545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.193605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.193646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.204733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.204793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.204830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.215205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.215265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.215301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.225756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.225816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.225853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.236725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.236786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.236823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.247842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.247900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.844 [2024-07-23 09:02:29.247937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.844 [2024-07-23 09:02:29.258862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.844 [2024-07-23 09:02:29.258922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.269809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.269868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.269904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.280774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.280843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.280881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.291570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.291629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.291666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.302531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.302593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.302630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.313518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.313577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.313613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.324532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.324590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.324626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.335478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.335537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.335573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.346464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.346524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.346560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:16.845 [2024-07-23 09:02:29.357451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f7a00) 00:49:16.845 [2024-07-23 09:02:29.357509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:16.845 [2024-07-23 09:02:29.357545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:17.106 [2024-07-23 09:02:29.368577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.106 [2024-07-23 09:02:29.368639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.106 [2024-07-23 09:02:29.368690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:17.106 [2024-07-23 09:02:29.379562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.106 [2024-07-23 09:02:29.379621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.106 [2024-07-23 09:02:29.379658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:17.106 [2024-07-23 09:02:29.390515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.106 [2024-07-23 09:02:29.390573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.106 [2024-07-23 09:02:29.390608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:17.106 [2024-07-23 09:02:29.401618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.106 [2024-07-23 09:02:29.401676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.106 [2024-07-23 09:02:29.401713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:17.106 [2024-07-23 09:02:29.412696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.106 [2024-07-23 09:02:29.412755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.106 [2024-07-23 09:02:29.412791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:17.106 [2024-07-23 09:02:29.423689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.106 [2024-07-23 09:02:29.423748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.106 [2024-07-23 09:02:29.423784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:17.106 [2024-07-23 09:02:29.434700] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.106 [2024-07-23 09:02:29.434758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.106 [2024-07-23 09:02:29.434793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:17.107 [2024-07-23 09:02:29.445712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.107 [2024-07-23 09:02:29.445770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.107 [2024-07-23 09:02:29.445806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:17.107 [2024-07-23 09:02:29.456705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.107 [2024-07-23 09:02:29.456764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.107 [2024-07-23 09:02:29.456823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:17.107 [2024-07-23 09:02:29.468003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.107 [2024-07-23 09:02:29.468070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.107 [2024-07-23 09:02:29.468107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:17.107 [2024-07-23 09:02:29.479278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.107 [2024-07-23 09:02:29.479349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.107 [2024-07-23 09:02:29.479386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:17.107 [2024-07-23 09:02:29.490243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.107 [2024-07-23 09:02:29.490301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.107 [2024-07-23 09:02:29.490349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:17.107 [2024-07-23 09:02:29.501378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f7a00) 00:49:17.107 [2024-07-23 09:02:29.501435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.107 [2024-07-23 09:02:29.501471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:17.107 00:49:17.107 Latency(us) 00:49:17.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:17.107 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:49:17.107 nvme0n1 : 2.01 2836.92 354.62 0.00 0.00 5629.85 1250.04 13398.47 00:49:17.107 =================================================================================================================== 00:49:17.107 Total : 2836.92 354.62 0.00 0.00 5629.85 1250.04 13398.47 00:49:17.107 0 00:49:17.107 09:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:49:17.107 09:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:49:17.107 09:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:49:17.107 | .driver_specific 00:49:17.107 | .nvme_error 00:49:17.107 | .status_code 00:49:17.107 | .command_transient_transport_error' 00:49:17.107 09:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 )) 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2543715 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2543715 ']' 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2543715 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2543715 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2543715' 00:49:17.676 killing process with pid 2543715 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2543715 00:49:17.676 Received shutdown signal, test time was about 2.000000 seconds 00:49:17.676 00:49:17.676 Latency(us) 00:49:17.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:17.676 =================================================================================================================== 00:49:17.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:17.676 09:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2543715 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randwrite 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2544398 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2544398 /var/tmp/bperf.sock 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2544398 ']' 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:19.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:19.060 09:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:19.320 [2024-07-23 09:02:31.620842] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
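For reference, the transient-error check traced just before this point (host/digest.sh@71 via @27/@28) reduces to one iostat RPC against the bdevperf socket plus a jq projection; that run extracted a count of 183, which is what the '(( 183 > 0 ))' assertion verifies. The summary table above it is also self-consistent: 2836.92 IOPS at an IO size of 131072 bytes is 2836.92 x 128 KiB ≈ 354.6 MiB/s, matching the reported throughput. A minimal sketch of the query, assuming a bdevperf instance is still serving RPC on /var/tmp/bperf.sock and that 'bdev_nvme_set_options --nvme-error-stat' has been applied so per-status-code counters are kept:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount helper traced above (host/digest.sh@27/@28).
# Assumes bdevperf is serving RPC on /var/tmp/bperf.sock and that
# 'bdev_nvme_set_options --nvme-error-stat' was applied, so the nvme bdev keeps
# per-status-code error counters.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# bdev_get_iostat returns JSON; the digest test keys off the counter for
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The test only requires that at least one injected digest error surfaced as a
# transient transport error; the randread run above reported 183 of them.
(( errcount > 0 )) && echo "transient transport errors: $errcount"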
00:49:19.320 [2024-07-23 09:02:31.621150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544398 ] 00:49:19.320 EAL: No free 2048 kB hugepages reported on node 1 00:49:19.579 [2024-07-23 09:02:31.856714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:19.839 [2024-07-23 09:02:32.168712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:20.779 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:20.779 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:49:20.779 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:20.779 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:21.349 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:49:21.349 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:21.349 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:21.349 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:21.349 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:21.349 09:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:21.919 nvme0n1 00:49:21.919 09:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:49:21.919 09:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:21.919 09:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:21.919 09:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:21.919 09:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:49:21.919 09:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:22.180 Running I/O for 2 seconds... 
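The randwrite pass set up here mirrors the randread one: a second bdevperf (-w randwrite -o 4096 -q 128 -t 2 -z) is attached to the same TCP target with data digest enabled, crc32c error injection is re-armed, and the 2-second workload is started; each digest mismatch then shows up below as a 'Data digest error' followed by a COMMAND TRANSIENT TRANSPORT ERROR completion. A condensed sketch of the RPC sequence traced above (paths and arguments copied from the trace; the socket for the accel_error_inject_error calls is an assumption, since the trace issues them through rpc_cmd rather than through the bperf socket):

#!/usr/bin/env bash
# Condensed sketch of the setup traced above (host/digest.sh@61-@69). Arguments are
# taken from the trace; which RPC socket accel_error_inject_error targets is assumed.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock

# Keep per-status-code error counters and use unlimited bdev-level retries (-1).
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Disable any previously armed injection before attaching (same order as the trace).
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the controller with data digest enabled (--ddgst) so every payload is CRC32C-checked.
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm crc32c corruption with the same '-i 256' argument used in the trace.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second randwrite workload; the digest errors logged below follow from this.
"$bperf_py" -s "$sock" perform_tests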
00:49:22.440 [2024-07-23 09:02:34.714800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:49:22.440 [2024-07-23 09:02:34.716438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.716505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.733575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:49:22.440 [2024-07-23 09:02:34.735121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.735177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.755571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:49:22.440 [2024-07-23 09:02:34.757441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.757495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.775980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:49:22.440 [2024-07-23 09:02:34.778084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.778140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.794739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 00:49:22.440 [2024-07-23 09:02:34.796840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.796894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.813153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7c50 00:49:22.440 [2024-07-23 09:02:34.814430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.814484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.832969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1ca0 00:49:22.440 [2024-07-23 09:02:34.834042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.834097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.855449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:49:22.440 [2024-07-23 09:02:34.858029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.858082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.873595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0bc0 00:49:22.440 [2024-07-23 09:02:34.875373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.875426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.893384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:49:22.440 [2024-07-23 09:02:34.894980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.895034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.915690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:49:22.440 [2024-07-23 09:02:34.918791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.918845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.929464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:49:22.440 [2024-07-23 09:02:34.930731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.930784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:49:22.440 [2024-07-23 09:02:34.947822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:49:22.440 [2024-07-23 09:02:34.949083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.440 [2024-07-23 09:02:34.949135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:34.969794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:49:22.701 [2024-07-23 09:02:34.971355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:34.971417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:34.990056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:49:22.701 [2024-07-23 09:02:34.991875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:34.991929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.008581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:49:22.701 [2024-07-23 09:02:35.010359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.010413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.030282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:49:22.701 [2024-07-23 09:02:35.032370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.032424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.050304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:49:22.701 [2024-07-23 09:02:35.052648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.052702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.068754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:49:22.701 [2024-07-23 09:02:35.071078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.071132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.086956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:49:22.701 [2024-07-23 09:02:35.088498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.088551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.106949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb760 00:49:22.701 [2024-07-23 09:02:35.108253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:49:22.701 [2024-07-23 09:02:35.108307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.129779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:49:22.701 [2024-07-23 09:02:35.132670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.132724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.148069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:49:22.701 [2024-07-23 09:02:35.150166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.150219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.166148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:49:22.701 [2024-07-23 09:02:35.169276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.169340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.184603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2510 00:49:22.701 [2024-07-23 09:02:35.185911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.185963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:49:22.701 [2024-07-23 09:02:35.205054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f46d0 00:49:22.701 [2024-07-23 09:02:35.206632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.701 [2024-07-23 09:02:35.206684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.223868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:49:22.962 [2024-07-23 09:02:35.225418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.225473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.246218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:49:22.962 [2024-07-23 09:02:35.248088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.248144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.266837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:49:22.962 [2024-07-23 09:02:35.268959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.269013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.285548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed0b0 00:49:22.962 [2024-07-23 09:02:35.288702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.288756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.304102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:49:22.962 [2024-07-23 09:02:35.305393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.305446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.324756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:49:22.962 [2024-07-23 09:02:35.326306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.326367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.343626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2d80 00:49:22.962 [2024-07-23 09:02:35.345141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.345195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.364717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e73e0 00:49:22.962 [2024-07-23 09:02:35.366523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.366576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.387071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fdeb0 00:49:22.962 [2024-07-23 09:02:35.389161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.389216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.407731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:49:22.962 [2024-07-23 09:02:35.410112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.410167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.426491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6b70 00:49:22.962 [2024-07-23 09:02:35.429859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.429912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.445209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe720 00:49:22.962 [2024-07-23 09:02:35.446763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.446816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:22.962 [2024-07-23 09:02:35.465707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fbcf0 00:49:22.962 [2024-07-23 09:02:35.467467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:22.962 [2024-07-23 09:02:35.467520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:23.229 [2024-07-23 09:02:35.484666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e95a0 00:49:23.229 [2024-07-23 09:02:35.486426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.229 [2024-07-23 09:02:35.486495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:49:23.229 [2024-07-23 09:02:35.507138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:49:23.229 [2024-07-23 09:02:35.509246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.229 [2024-07-23 09:02:35.509301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.527807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0ea0 
00:49:23.230 [2024-07-23 09:02:35.530142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.530196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.546696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc560 00:49:23.230 [2024-07-23 09:02:35.549028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.549083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.565125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:49:23.230 [2024-07-23 09:02:35.566654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.566707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.585304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:49:23.230 [2024-07-23 09:02:35.586617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.586671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.608217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eea00 00:49:23.230 [2024-07-23 09:02:35.611085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.611140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.626650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:49:23.230 [2024-07-23 09:02:35.628711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.628764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.644747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:49:23.230 [2024-07-23 09:02:35.647906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.647959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.663206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:49:23.230 [2024-07-23 09:02:35.664480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.664534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.683659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:49:23.230 [2024-07-23 09:02:35.685191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.685243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.702443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:49:23.230 [2024-07-23 09:02:35.703946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.703998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.724498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:49:23.230 [2024-07-23 09:02:35.726303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.726368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:23.230 [2024-07-23 09:02:35.744973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:49:23.230 [2024-07-23 09:02:35.747045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.230 [2024-07-23 09:02:35.747098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:23.492 [2024-07-23 09:02:35.763838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:49:23.492 [2024-07-23 09:02:35.765918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.492 [2024-07-23 09:02:35.765972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:49:23.492 [2024-07-23 09:02:35.785989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:49:23.492 [2024-07-23 09:02:35.788361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.492 [2024-07-23 09:02:35.788415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:23.492 [2024-07-23 09:02:35.806514] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:49:23.492 [2024-07-23 09:02:35.809126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.492 [2024-07-23 09:02:35.809180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:23.492 [2024-07-23 09:02:35.825325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:49:23.492 [2024-07-23 09:02:35.827918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.492 [2024-07-23 09:02:35.827982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:49:23.492 [2024-07-23 09:02:35.844481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dece0 00:49:23.492 [2024-07-23 09:02:35.846269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.492 [2024-07-23 09:02:35.846332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:23.492 [2024-07-23 09:02:35.864668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:49:23.492 [2024-07-23 09:02:35.866249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.492 [2024-07-23 09:02:35.866302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:23.492 [2024-07-23 09:02:35.887577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ff3c8 00:49:23.493 [2024-07-23 09:02:35.890685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.493 [2024-07-23 09:02:35.890738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:23.493 [2024-07-23 09:02:35.901652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e49b0 00:49:23.493 [2024-07-23 09:02:35.902905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.493 [2024-07-23 09:02:35.902956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:49:23.493 [2024-07-23 09:02:35.924066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:49:23.493 [2024-07-23 09:02:35.927445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.493 [2024-07-23 09:02:35.927497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:49:23.493 [2024-07-23 09:02:35.942420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:49:23.493 [2024-07-23 09:02:35.943922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.493 [2024-07-23 09:02:35.943973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:23.493 [2024-07-23 09:02:35.962867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:49:23.493 [2024-07-23 09:02:35.964936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.493 [2024-07-23 09:02:35.964991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:23.493 [2024-07-23 09:02:35.982258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:49:23.493 [2024-07-23 09:02:35.984057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.493 [2024-07-23 09:02:35.984111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:49:23.493 [2024-07-23 09:02:36.004513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd640 00:49:23.493 [2024-07-23 09:02:36.006576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.493 [2024-07-23 09:02:36.006630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:23.753 [2024-07-23 09:02:36.024898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:49:23.753 [2024-07-23 09:02:36.027198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.753 [2024-07-23 09:02:36.027251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:23.753 [2024-07-23 09:02:36.044559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed920 00:49:23.753 [2024-07-23 09:02:36.047135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.753 [2024-07-23 09:02:36.047187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:49:23.753 [2024-07-23 09:02:36.063057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:49:23.753 [2024-07-23 09:02:36.064853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.753 [2024-07-23 09:02:36.064905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:23.753 [2024-07-23 09:02:36.083445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:49:23.753 [2024-07-23 09:02:36.085024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.753 [2024-07-23 09:02:36.085077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:23.753 [2024-07-23 09:02:36.106543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:49:23.753 [2024-07-23 09:02:36.109677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.753 [2024-07-23 09:02:36.109731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:23.753 [2024-07-23 09:02:36.120662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1f80 00:49:23.753 [2024-07-23 09:02:36.121928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.753 [2024-07-23 09:02:36.121980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:49:23.753 [2024-07-23 09:02:36.143033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f92c0 00:49:23.754 [2024-07-23 09:02:36.146385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.754 [2024-07-23 09:02:36.146439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:23.754 [2024-07-23 09:02:36.161459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:49:23.754 [2024-07-23 09:02:36.162993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.754 [2024-07-23 09:02:36.163046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:23.754 [2024-07-23 09:02:36.181804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:49:23.754 [2024-07-23 09:02:36.183624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.754 [2024-07-23 09:02:36.183676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:23.754 [2024-07-23 09:02:36.200486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:49:23.754 [2024-07-23 09:02:36.202232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.754 [2024-07-23 09:02:36.202284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:49:23.754 [2024-07-23 09:02:36.222279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:49:23.754 [2024-07-23 09:02:36.224332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.754 [2024-07-23 09:02:36.224385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:23.754 [2024-07-23 09:02:36.242392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:49:23.754 [2024-07-23 09:02:36.244687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.754 [2024-07-23 09:02:36.244739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:23.754 [2024-07-23 09:02:36.260836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6890 00:49:23.754 [2024-07-23 09:02:36.263102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:23.754 [2024-07-23 09:02:36.263156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:49:24.014 [2024-07-23 09:02:36.278920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f57b0 00:49:24.015 [2024-07-23 09:02:36.280402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.280457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.298719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4f40 00:49:24.015 [2024-07-23 09:02:36.300022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.300076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.318593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fac10 00:49:24.015 [2024-07-23 09:02:36.320357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.320409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.338004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e27f0 00:49:24.015 [2024-07-23 09:02:36.339769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:49:24.015 [2024-07-23 09:02:36.339830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.357427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6300 00:49:24.015 [2024-07-23 09:02:36.359196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.359248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.376976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:49:24.015 [2024-07-23 09:02:36.378755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.378807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.396461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195dfdc0 00:49:24.015 [2024-07-23 09:02:36.398205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.398256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.415894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaef0 00:49:24.015 [2024-07-23 09:02:36.417666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.417717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.435380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea680 00:49:24.015 [2024-07-23 09:02:36.437144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.437196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.454765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1f80 00:49:24.015 [2024-07-23 09:02:36.456525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.456577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.474166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e38d0 00:49:24.015 [2024-07-23 09:02:36.475947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:2436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.475998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.493679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df550 00:49:24.015 [2024-07-23 09:02:36.495420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.495472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.513185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:49:24.015 [2024-07-23 09:02:36.514909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.514963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.015 [2024-07-23 09:02:36.532824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9e10 00:49:24.015 [2024-07-23 09:02:36.534577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.015 [2024-07-23 09:02:36.534630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.555036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef270 00:49:24.276 [2024-07-23 09:02:36.558005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.558059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.573514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:49:24.276 [2024-07-23 09:02:36.575531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.575596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.591701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195efae0 00:49:24.276 [2024-07-23 09:02:36.594784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.594838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.610163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:49:24.276 [2024-07-23 09:02:36.611442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.611495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.630080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb760 00:49:24.276 [2024-07-23 09:02:36.631370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.631423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.650019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:49:24.276 [2024-07-23 09:02:36.651288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.651350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.671982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:49:24.276 [2024-07-23 09:02:36.675242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.675304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:24.276 [2024-07-23 09:02:36.690409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f35f0 00:49:24.276 [2024-07-23 09:02:36.691958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:24.276 [2024-07-23 09:02:36.692011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:49:24.276 00:49:24.276 Latency(us) 00:49:24.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:24.276 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:49:24.276 nvme0n1 : 2.01 12908.84 50.43 0.00 0.00 9901.10 4369.07 24660.95 00:49:24.277 =================================================================================================================== 00:49:24.277 Total : 12908.84 50.43 0.00 0.00 9901.10 4369.07 24660.95 00:49:24.277 0 00:49:24.277 09:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:49:24.277 09:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:49:24.277 09:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:49:24.277 | .driver_specific 00:49:24.277 | .nvme_error 00:49:24.277 | .status_code 00:49:24.277 | .command_transient_transport_error' 00:49:24.277 09:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 101 > 0 )) 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2544398 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2544398 ']' 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2544398 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2544398 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2544398' 00:49:24.848 killing process with pid 2544398 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2544398 00:49:24.848 Received shutdown signal, test time was about 2.000000 seconds 00:49:24.848 00:49:24.848 Latency(us) 00:49:24.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:24.848 =================================================================================================================== 00:49:24.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:24.848 09:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2544398 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2545196 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2545196 /var/tmp/bperf.sock 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2545196 ']' 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:26.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:26.227 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:26.487 [2024-07-23 09:02:38.749070] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:26.487 [2024-07-23 09:02:38.749246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545196 ] 00:49:26.487 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:26.487 Zero copy mechanism will not be used. 00:49:26.487 EAL: No free 2048 kB hugepages reported on node 1 00:49:26.487 [2024-07-23 09:02:38.908992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:26.747 [2024-07-23 09:02:39.222722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:27.714 09:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:27.714 09:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:49:27.714 09:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:27.714 09:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:49:27.714 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:49:27.714 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:27.714 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:27.714 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:27.714 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:27.714 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:49:28.283 nvme0n1 00:49:28.283 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:49:28.283 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:28.283 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:28.283 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:28.283 09:02:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:49:28.284 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:28.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:49:28.544 Zero copy mechanism will not be used. 00:49:28.544 Running I/O for 2 seconds... 00:49:28.544 [2024-07-23 09:02:40.980115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:40.980709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:40.980781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:28.544 [2024-07-23 09:02:40.992525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:40.993125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:40.993182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:28.544 [2024-07-23 09:02:41.004866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:41.005437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:41.005492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:28.544 [2024-07-23 09:02:41.017117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:41.017678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:41.017734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:28.544 [2024-07-23 09:02:41.028927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:41.029499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:41.029555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:28.544 [2024-07-23 09:02:41.040702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:41.041260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:41.041327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:28.544 [2024-07-23 09:02:41.052602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:41.053163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:41.053218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:28.544 [2024-07-23 09:02:41.064288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.544 [2024-07-23 09:02:41.064831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.544 [2024-07-23 09:02:41.064887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.076362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.076908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.076964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.088497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.089061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.089116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.100670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.101213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.101267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.112741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.113265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.113328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.124685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.125233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.125287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.136288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.136464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.136515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.148635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.149189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.149243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.160869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.161427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.161493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.172818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.173348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.805 [2024-07-23 09:02:41.173423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:28.805 [2024-07-23 09:02:41.184597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.805 [2024-07-23 09:02:41.185152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.185206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.196820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.197384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.197438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.209147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.209665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.209721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.221342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.221835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.221889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.233846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.234349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.234404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.245786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.245999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.246050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.258185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.258689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.258744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.270425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.270976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.271030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.282641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.283134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.283189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.294718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.295290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.295353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.306911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.307391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.307445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:28.806 [2024-07-23 09:02:41.319492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:28.806 [2024-07-23 09:02:41.320065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:28.806 [2024-07-23 09:02:41.320118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.066 [2024-07-23 09:02:41.331646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.066 [2024-07-23 09:02:41.332193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.332248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.343849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.344409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.344464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.355854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.356416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.356470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.367907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.368469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.368533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.380390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.380915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.380970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.391932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.392447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.392501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.404146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.404647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.404702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.413883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.414415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.414470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.425447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.425962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.426017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.436400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.436934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.436990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.447651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.448207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.448261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.459292] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.459857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.459912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.471063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.471670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.471726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.483432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.483945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.483998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.494912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.495423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.495480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.506618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.507143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.507200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.518258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.518452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.518506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.529415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.529892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.529946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.540386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.540898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.540952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.551528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.551693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.551745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.562643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.563167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.563230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.573929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.574448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.574502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.067 [2024-07-23 09:02:41.585452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.067 [2024-07-23 09:02:41.585972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.067 [2024-07-23 09:02:41.586026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.596048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.596627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.596682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.606992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.607486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.607541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.617547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.618056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.618109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.628994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.629555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.629619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.640522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.641012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.641066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.651995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.652486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.652541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.663440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.663939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.663992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.675093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.675565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.675621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.685754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.686147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.686202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.328 [2024-07-23 09:02:41.696408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.328 [2024-07-23 09:02:41.696803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.328 [2024-07-23 09:02:41.696857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.706664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.707059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.707113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.717217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.717612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.717667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.727461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.727865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.727918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.738085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.738504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.738557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.748770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.749168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.749222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.759531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.759940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.759993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.770289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.770699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.770752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.780698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.781087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.781140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.791437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.791830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.791882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.802711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.803103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.803156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.813321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.813718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.813772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.823986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.824391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.824445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.835178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.835587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.835642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.329 [2024-07-23 09:02:41.846163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.329 [2024-07-23 09:02:41.846586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.329 [2024-07-23 09:02:41.846641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.589 [2024-07-23 09:02:41.857134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.857561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.857616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.867978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.868386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.868441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.878505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.878908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.878963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.889602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.889995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.890049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.901233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.901670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.901725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.912145] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.912585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.912640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.923485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.923879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.923932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.934724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.935153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.935206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.945973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.946507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.946560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.957270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.957796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.957850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.968413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.968808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.968861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.979180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.979584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.979638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.988518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.988899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.988951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:41.997177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:41.997564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:41.997618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.005588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.005963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.006016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.014452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.014871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.014924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.023099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.023508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.023581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.033476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.033882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.033936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.043657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.044054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.044108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.054218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.054634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.054688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.063305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.063701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.063756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.071748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.072137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.072190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.080260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.080649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.080702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.089468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.089852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.089908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.097977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.098370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.590 [2024-07-23 09:02:42.098424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.590 [2024-07-23 09:02:42.107455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.590 [2024-07-23 09:02:42.107917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:49:29.591 [2024-07-23 09:02:42.107970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.851 [2024-07-23 09:02:42.117373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.851 [2024-07-23 09:02:42.117773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.851 [2024-07-23 09:02:42.117826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.851 [2024-07-23 09:02:42.127814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.851 [2024-07-23 09:02:42.128206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.851 [2024-07-23 09:02:42.128261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.851 [2024-07-23 09:02:42.137561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.851 [2024-07-23 09:02:42.137931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.851 [2024-07-23 09:02:42.137983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.851 [2024-07-23 09:02:42.147449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.851 [2024-07-23 09:02:42.147824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.851 [2024-07-23 09:02:42.147878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.851 [2024-07-23 09:02:42.157783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.851 [2024-07-23 09:02:42.158178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.851 [2024-07-23 09:02:42.158233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.851 [2024-07-23 09:02:42.168178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.851 [2024-07-23 09:02:42.168589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.168643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.178772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.179244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.179298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.189535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.189940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.190004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.200365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.200808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.200861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.211424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.211843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.211896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.222528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.222958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.223011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.233843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.234291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.234356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.244526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.244935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.244989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.255251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.255681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.255735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.266401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.266799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.266853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.277283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.277821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.277874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.288422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.288818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.288872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.299233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.299643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.299697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.310217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.310617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.310670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.321139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.321659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.321713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.332159] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.332592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.332646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.343188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.343593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.343648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.354250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.354790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.354844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:29.852 [2024-07-23 09:02:42.365457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:29.852 [2024-07-23 09:02:42.365956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:29.852 [2024-07-23 09:02:42.366009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.376990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.377396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.377460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.387875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.388283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.388348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.398547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.398972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.399026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.409812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.410206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.410260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.420718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.421134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.421188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.431849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.432373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.432426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.443221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.443624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.443678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.454006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.454561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.454614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.464723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.465128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.465182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.475606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.476094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.476147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.486696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.487102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.487155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.497901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.498369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.498423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.508837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.509236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.509291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.520008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.520470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.520524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.531385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.531779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.531832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.542299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.542739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.542792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.553406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.553806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.553858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.564653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.565080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.565144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.575894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.113 [2024-07-23 09:02:42.576375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.113 [2024-07-23 09:02:42.576428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.113 [2024-07-23 09:02:42.587083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.114 [2024-07-23 09:02:42.587610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.114 [2024-07-23 09:02:42.587664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.114 [2024-07-23 09:02:42.597988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.114 [2024-07-23 09:02:42.598531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.114 [2024-07-23 09:02:42.598584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.114 [2024-07-23 09:02:42.610211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.114 [2024-07-23 09:02:42.610623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.114 [2024-07-23 09:02:42.610677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.114 [2024-07-23 09:02:42.620895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.114 [2024-07-23 09:02:42.621332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.114 [2024-07-23 09:02:42.621398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.114 [2024-07-23 09:02:42.630845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.114 [2024-07-23 09:02:42.631233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.114 [2024-07-23 09:02:42.631287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.641227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.641693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.641748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.651575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.651996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.652051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.662202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.662668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.662723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.672702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.673110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.673165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.682338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.682745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.682798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.692135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.692619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.692673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.701860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.702332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.702397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.711249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.711643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.711698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.720943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.721410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.721464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.730420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.730811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.730866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.740007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.740475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.740529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.749774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.750242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.750296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.759915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.760331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.760396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.770448] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.770850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.770905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.780976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.781392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.781447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.792643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.793040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.793095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.803723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.804125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.804181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.814416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.814815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.814870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.824244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.824660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.375 [2024-07-23 09:02:42.824714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.375 [2024-07-23 09:02:42.834278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.375 [2024-07-23 09:02:42.834697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.376 [2024-07-23 09:02:42.834751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.376 [2024-07-23 09:02:42.844759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.376 [2024-07-23 09:02:42.845158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.376 [2024-07-23 09:02:42.845212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.376 [2024-07-23 09:02:42.855754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.376 [2024-07-23 09:02:42.856156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.376 [2024-07-23 09:02:42.856210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.376 [2024-07-23 09:02:42.866030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.376 [2024-07-23 09:02:42.866445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.376 [2024-07-23 09:02:42.866500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.376 [2024-07-23 09:02:42.876818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.376 [2024-07-23 09:02:42.877226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.376 [2024-07-23 09:02:42.877280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.376 [2024-07-23 09:02:42.887197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.376 [2024-07-23 09:02:42.887604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.376 [2024-07-23 09:02:42.887658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.898189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.898602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.898658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.908880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.909283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.909349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.919283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.919702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.919756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.929996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.930410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.930464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.940611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.941010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.941065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.951292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.951706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.951760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.961862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.962264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.962330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:49:30.635 [2024-07-23 09:02:42.972103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:49:30.635 [2024-07-23 09:02:42.972421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:30.635 [2024-07-23 09:02:42.972475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:49:30.635 00:49:30.635 Latency(us) 00:49:30.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:30.635 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:49:30.635 nvme0n1 : 2.01 2832.66 354.08 0.00 0.00 5630.54 3883.61 12913.02 00:49:30.635 
=================================================================================================================== 00:49:30.635 Total : 2832.66 354.08 0.00 0.00 5630.54 3883.61 12913.02 00:49:30.635 0 00:49:30.635 09:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:49:30.636 09:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:49:30.636 09:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:49:30.636 | .driver_specific 00:49:30.636 | .nvme_error 00:49:30.636 | .status_code 00:49:30.636 | .command_transient_transport_error' 00:49:30.636 09:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 )) 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2545196 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2545196 ']' 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2545196 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2545196 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2545196' 00:49:31.206 killing process with pid 2545196 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2545196 00:49:31.206 Received shutdown signal, test time was about 2.000000 seconds 00:49:31.206 00:49:31.206 Latency(us) 00:49:31.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:31.206 =================================================================================================================== 00:49:31.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:31.206 09:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2545196 00:49:32.587 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2542630 00:49:32.587 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2542630 ']' 00:49:32.587 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2542630 00:49:32.587 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:49:32.587 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:49:32.587 09:02:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2542630 00:49:32.587 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:49:32.587 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:49:32.587 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2542630' 00:49:32.587 killing process with pid 2542630 00:49:32.587 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2542630 00:49:32.587 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2542630 00:49:35.129 00:49:35.129 real 0m32.790s 00:49:35.129 user 1m5.981s 00:49:35.129 sys 0m6.906s 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:49:35.129 ************************************ 00:49:35.129 END TEST nvmf_digest_error 00:49:35.129 ************************************ 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:35.129 rmmod nvme_tcp 00:49:35.129 rmmod nvme_fabrics 00:49:35.129 rmmod nvme_keyring 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2542630 ']' 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2542630 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2542630 ']' 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2542630 00:49:35.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2542630) - No such process 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2542630 is not found' 00:49:35.129 Process with pid 2542630 is not found 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:35.129 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:49:37.671 00:49:37.671 real 1m11.963s 00:49:37.671 user 2m13.607s 00:49:37.671 sys 0m16.405s 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:49:37.671 ************************************ 00:49:37.671 END TEST nvmf_digest 00:49:37.671 ************************************ 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:37.671 ************************************ 00:49:37.671 START TEST nvmf_bdevperf 00:49:37.671 ************************************ 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:49:37.671 * Looking for test storage... 
00:49:37.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:49:37.671 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:49:37.672 09:02:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:49:40.966 Found 0000:84:00.0 (0x8086 - 0x159b) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:49:40.966 Found 0000:84:00.1 (0x8086 - 0x159b) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:49:40.966 09:02:53 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:49:40.966 Found net devices under 0000:84:00.0: cvl_0_0 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:49:40.966 Found net devices under 0000:84:00.1: cvl_0_1 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:49:40.966 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:49:40.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:40.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:49:40.967 00:49:40.967 --- 10.0.0.2 ping statistics --- 00:49:40.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:40.967 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:40.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:40.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:49:40.967 00:49:40.967 --- 10.0.0.1 ping statistics --- 00:49:40.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:40.967 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:40.967 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2548211 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2548211 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2548211 ']' 
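For reference, the nvmf_tcp_init sequence traced above builds a two-namespace loopback topology out of the two ice ports: cvl_0_0 becomes the target-side interface (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A minimal by-hand equivalent of what the trace shows, assuming the same interface names, addresses and workspace path (a sketch, not the literal common.sh code):

# target port goes into its own namespace, initiator port stays in the root namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic reach port 4420 and sanity-check both directions, as the log does
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# then start the target inside the namespace with the same flags nvmfappstart passes above
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &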
00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:41.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:41.227 09:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:41.227 [2024-07-23 09:02:53.714939] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:41.227 [2024-07-23 09:02:53.715266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:41.486 EAL: No free 2048 kB hugepages reported on node 1 00:49:41.486 [2024-07-23 09:02:53.999705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:42.054 [2024-07-23 09:02:54.321781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:42.054 [2024-07-23 09:02:54.321864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:42.054 [2024-07-23 09:02:54.321907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:42.054 [2024-07-23 09:02:54.321935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:42.055 [2024-07-23 09:02:54.321961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
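The target was launched with -e 0xFFFF, so tracepoints are enabled, and the app_setup_trace notices above name two ways to pull them out of this instance (shm name nvmf, instance id 0). A short sketch; the build/bin/spdk_trace location is an assumption about this workspace layout:

# decode a live snapshot of the running target's tracepoints (-s nvmf -i 0, per the notice above)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
# or keep the raw shared-memory file for offline analysis/debug, as the last notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0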
00:49:42.055 [2024-07-23 09:02:54.322178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:49:42.055 [2024-07-23 09:02:54.322251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:42.055 [2024-07-23 09:02:54.322267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:42.990 [2024-07-23 09:02:55.294195] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:42.990 Malloc0 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:42.990 [2024-07-23 09:02:55.440797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:49:42.990 { 00:49:42.990 "params": { 00:49:42.990 "name": "Nvme$subsystem", 00:49:42.990 "trtype": "$TEST_TRANSPORT", 00:49:42.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:49:42.990 "adrfam": "ipv4", 00:49:42.990 "trsvcid": "$NVMF_PORT", 00:49:42.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:49:42.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:49:42.990 "hdgst": ${hdgst:-false}, 00:49:42.990 "ddgst": ${ddgst:-false} 00:49:42.990 }, 00:49:42.990 "method": "bdev_nvme_attach_controller" 00:49:42.990 } 00:49:42.990 EOF 00:49:42.990 )") 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:49:42.990 09:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:49:42.990 "params": { 00:49:42.990 "name": "Nvme1", 00:49:42.990 "trtype": "tcp", 00:49:42.990 "traddr": "10.0.0.2", 00:49:42.990 "adrfam": "ipv4", 00:49:42.990 "trsvcid": "4420", 00:49:42.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:49:42.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:49:42.990 "hdgst": false, 00:49:42.990 "ddgst": false 00:49:42.990 }, 00:49:42.990 "method": "bdev_nvme_attach_controller" 00:49:42.990 }' 00:49:43.249 [2024-07-23 09:02:55.619635] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:43.249 [2024-07-23 09:02:55.619963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548487 ] 00:49:43.507 EAL: No free 2048 kB hugepages reported on node 1 00:49:43.507 [2024-07-23 09:02:55.879936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:43.765 [2024-07-23 09:02:56.191793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:49:44.737 Running I/O for 1 seconds... 
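The tgt_init steps traced above (nvmf_create_transport through nvmf_subsystem_add_listener) drive the target entirely over JSON-RPC; rpc_cmd is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Run standalone with the same arguments as the trace, the bring-up looks roughly like this (the rpc.py path is taken from this workspace; treat it as an illustrative sketch, not the test's literal code):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # uses /var/tmp/spdk.sock by default
# TCP transport with the same options the trace passes (-t tcp -o -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks, exported through cnode1 on 10.0.0.2:4420
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420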
00:49:45.671
00:49:45.671 Latency(us)
00:49:45.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:45.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:49:45.671 Verification LBA range: start 0x0 length 0x4000
00:49:45.671 Nvme1n1 : 1.03 4714.01 18.41 0.00 0.00 27003.40 5946.79 22233.69
00:49:45.671 ===================================================================================================================
00:49:45.671 Total : 4714.01 18.41 0.00 0.00 27003.40 5946.79 22233.69
00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2548880 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:49:47.044 { 00:49:47.044 "params": { 00:49:47.044 "name": "Nvme$subsystem", 00:49:47.044 "trtype": "$TEST_TRANSPORT", 00:49:47.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:49:47.044 "adrfam": "ipv4", 00:49:47.044 "trsvcid": "$NVMF_PORT", 00:49:47.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:49:47.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:49:47.044 "hdgst": ${hdgst:-false}, 00:49:47.044 "ddgst": ${ddgst:-false} 00:49:47.044 }, 00:49:47.044 "method": "bdev_nvme_attach_controller" 00:49:47.044 } 00:49:47.044 EOF 00:49:47.044 )") 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:49:47.044 09:02:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:49:47.044 "params": { 00:49:47.044 "name": "Nvme1", 00:49:47.044 "trtype": "tcp", 00:49:47.044 "traddr": "10.0.0.2", 00:49:47.044 "adrfam": "ipv4", 00:49:47.044 "trsvcid": "4420", 00:49:47.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:49:47.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:49:47.044 "hdgst": false, 00:49:47.044 "ddgst": false 00:49:47.044 }, 00:49:47.044 "method": "bdev_nvme_attach_controller" 00:49:47.044 }' 00:49:47.044 [2024-07-23 09:02:59.331035] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:47.044 [2024-07-23 09:02:59.331375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548880 ] 00:49:47.302 EAL: No free 2048 kB hugepages reported on node 1 00:49:47.302 [2024-07-23 09:02:59.569334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:47.560 [2024-07-23 09:02:59.882567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:49:48.126 Running I/O for 15 seconds... 
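The gen_nvmf_target_json fragment printed above is just the bdev_nvme_attach_controller entry; what bdevperf actually reads from /dev/fd/63 is that entry wrapped in the standard SPDK JSON config layout ("subsystems" / "bdev" / "config"). Written to a file instead of a process substitution, the 15-second run above is roughly equivalent to the following sketch (the wrapper shape and the /tmp path are assumptions, not the function's literal output):

# config file telling bdevperf to attach to the target created earlier
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload flags as the run above: queue depth 128, 4 KiB verify I/O for 15 seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f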
00:49:50.033 09:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2548211 00:49:50.033 09:03:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:49:50.033 [2024-07-23 09:03:02.269070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.033 [2024-07-23 09:03:02.269217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.269382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.269421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.269459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.269491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.269524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.269554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.269623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.269680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.269741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.269799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.269861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.269914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 
09:03:02.270281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.270932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.270985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.271924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.271977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.272938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.272996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.034 [2024-07-23 09:03:02.273050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.034 [2024-07-23 09:03:02.273116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.273907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.273958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.274913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.274964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275380] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.275917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.275968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.276076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.276182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.276290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.276415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.276496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.276556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.035 [2024-07-23 09:03:02.276670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.035 [2024-07-23 09:03:02.276780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.035 [2024-07-23 09:03:02.276890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.035 [2024-07-23 09:03:02.276958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.277012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.277118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.277224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.277362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:49:50.036 [2024-07-23 09:03:02.277443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.277502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.277562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.277648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.277760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.277866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.277921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.277973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.278919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.278975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.279026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.279134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.279242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.279383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.279446] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.279512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:50.036 [2024-07-23 09:03:02.279602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.279713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.279820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.279928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.279984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.036 [2024-07-23 09:03:02.280718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.036 [2024-07-23 09:03:02.280777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.280830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.280885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.280937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.280992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:50.037 [2024-07-23 09:03:02.281485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.281921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.281978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:50.037 [2024-07-23 09:03:02.282040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.282094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7c80 is same with the state(5) to be set 00:49:50.037 [2024-07-23 09:03:02.282155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:50.037 [2024-07-23 09:03:02.282201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:50.037 [2024-07-23 09:03:02.282247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67560 len:8 PRP1 0x0 PRP2 0x0 00:49:50.037 [2024-07-23 09:03:02.282296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.282884] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f7c80 was disconnected and freed. reset controller. 
00:49:50.037 [2024-07-23 09:03:02.283126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.037 [2024-07-23 09:03:02.283210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.283276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.037 [2024-07-23 09:03:02.283368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.283402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.037 [2024-07-23 09:03:02.283430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.283457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:50.037 [2024-07-23 09:03:02.283484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:50.037 [2024-07-23 09:03:02.283510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.037 [2024-07-23 09:03:02.291760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.037 [2024-07-23 09:03:02.291937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.037 [2024-07-23 09:03:02.293449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.037 [2024-07-23 09:03:02.293505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.037 [2024-07-23 09:03:02.293539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.037 [2024-07-23 09:03:02.294205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.037 [2024-07-23 09:03:02.294726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.037 [2024-07-23 09:03:02.294802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.037 [2024-07-23 09:03:02.294858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.037 [2024-07-23 09:03:02.302723] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.037 [2024-07-23 09:03:02.303328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.037 [2024-07-23 09:03:02.321507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.037 [2024-07-23 09:03:02.322451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.037 [2024-07-23 09:03:02.322503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.037 [2024-07-23 09:03:02.322535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.037 [2024-07-23 09:03:02.323184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.037 [2024-07-23 09:03:02.323709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.037 [2024-07-23 09:03:02.323781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.037 [2024-07-23 09:03:02.323830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.037 [2024-07-23 09:03:02.331967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.037 [2024-07-23 09:03:02.340549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.037 [2024-07-23 09:03:02.341585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.037 [2024-07-23 09:03:02.341691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.037 [2024-07-23 09:03:02.341748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.037 [2024-07-23 09:03:02.342409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.037 [2024-07-23 09:03:02.342933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.037 [2024-07-23 09:03:02.343003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.037 [2024-07-23 09:03:02.343051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.037 [2024-07-23 09:03:02.350770] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.037 [2024-07-23 09:03:02.351377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.037 [2024-07-23 09:03:02.369609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.037 [2024-07-23 09:03:02.370618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.037 [2024-07-23 09:03:02.370712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.037 [2024-07-23 09:03:02.370770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.037 [2024-07-23 09:03:02.371432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.037 [2024-07-23 09:03:02.371971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.037 [2024-07-23 09:03:02.372042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.037 [2024-07-23 09:03:02.372092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.037 [2024-07-23 09:03:02.380006] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.037 [2024-07-23 09:03:02.380508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.037 [2024-07-23 09:03:02.398958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.037 [2024-07-23 09:03:02.399833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.037 [2024-07-23 09:03:02.399928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.038 [2024-07-23 09:03:02.399986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.038 [2024-07-23 09:03:02.400533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.038 [2024-07-23 09:03:02.401156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.038 [2024-07-23 09:03:02.401227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.038 [2024-07-23 09:03:02.401275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.038 [2024-07-23 09:03:02.409393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.038 [2024-07-23 09:03:02.417981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.038 [2024-07-23 09:03:02.418820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.038 [2024-07-23 09:03:02.418912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.038 [2024-07-23 09:03:02.418969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.038 [2024-07-23 09:03:02.419521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.038 [2024-07-23 09:03:02.420129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.038 [2024-07-23 09:03:02.420200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.038 [2024-07-23 09:03:02.420249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.038 [2024-07-23 09:03:02.428050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.038 [2024-07-23 09:03:02.436368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.038 [2024-07-23 09:03:02.437092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.038 [2024-07-23 09:03:02.437184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.038 [2024-07-23 09:03:02.437241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.038 [2024-07-23 09:03:02.437739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.038 [2024-07-23 09:03:02.438141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.038 [2024-07-23 09:03:02.438180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.038 [2024-07-23 09:03:02.438207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.038 [2024-07-23 09:03:02.445887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.038 [2024-07-23 09:03:02.455222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.038 [2024-07-23 09:03:02.456035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.038 [2024-07-23 09:03:02.456127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.038 [2024-07-23 09:03:02.456183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.038 [2024-07-23 09:03:02.456724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.038 [2024-07-23 09:03:02.457394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.038 [2024-07-23 09:03:02.457433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.038 [2024-07-23 09:03:02.457460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.038 [2024-07-23 09:03:02.465668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.038 [2024-07-23 09:03:02.474463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.038 [2024-07-23 09:03:02.475395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.038 [2024-07-23 09:03:02.475445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.038 [2024-07-23 09:03:02.475476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.038 [2024-07-23 09:03:02.476057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.038 [2024-07-23 09:03:02.476597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.038 [2024-07-23 09:03:02.476670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.038 [2024-07-23 09:03:02.476718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.038 [2024-07-23 09:03:02.484722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.038 [2024-07-23 09:03:02.493260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.038 [2024-07-23 09:03:02.493941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.038 [2024-07-23 09:03:02.494030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.038 [2024-07-23 09:03:02.494087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.039 [2024-07-23 09:03:02.494601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.039 [2024-07-23 09:03:02.495249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.039 [2024-07-23 09:03:02.495338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.039 [2024-07-23 09:03:02.495399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.039 [2024-07-23 09:03:02.503483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.039 [2024-07-23 09:03:02.512056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.039 [2024-07-23 09:03:02.512956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.039 [2024-07-23 09:03:02.513046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.039 [2024-07-23 09:03:02.513105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.039 [2024-07-23 09:03:02.513613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.039 [2024-07-23 09:03:02.514265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.039 [2024-07-23 09:03:02.514368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.039 [2024-07-23 09:03:02.514409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.039 [2024-07-23 09:03:02.522484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.039 [2024-07-23 09:03:02.529300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.039 [2024-07-23 09:03:02.530096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.039 [2024-07-23 09:03:02.530187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.039 [2024-07-23 09:03:02.530243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.039 [2024-07-23 09:03:02.530664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.039 [2024-07-23 09:03:02.531146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.039 [2024-07-23 09:03:02.531218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.039 [2024-07-23 09:03:02.531266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.039 [2024-07-23 09:03:02.538522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.039 [2024-07-23 09:03:02.547173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.039 [2024-07-23 09:03:02.548044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.039 [2024-07-23 09:03:02.548147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.039 [2024-07-23 09:03:02.548207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.039 [2024-07-23 09:03:02.548729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.039 [2024-07-23 09:03:02.549395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.039 [2024-07-23 09:03:02.549435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.039 [2024-07-23 09:03:02.549462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.299 [2024-07-23 09:03:02.557138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.299 [2024-07-23 09:03:02.565389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.299 [2024-07-23 09:03:02.566230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.299 [2024-07-23 09:03:02.566365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.299 [2024-07-23 09:03:02.566402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.299 [2024-07-23 09:03:02.566920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.299 [2024-07-23 09:03:02.567501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.299 [2024-07-23 09:03:02.567541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.299 [2024-07-23 09:03:02.567597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.299 [2024-07-23 09:03:02.575544] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.299 [2024-07-23 09:03:02.576126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.299 [2024-07-23 09:03:02.594549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.299 [2024-07-23 09:03:02.595508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.299 [2024-07-23 09:03:02.595560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.299 [2024-07-23 09:03:02.595616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.299 [2024-07-23 09:03:02.596259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.299 [2024-07-23 09:03:02.596731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.299 [2024-07-23 09:03:02.596804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.299 [2024-07-23 09:03:02.596851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.299 [2024-07-23 09:03:02.604528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.299 [2024-07-23 09:03:02.613673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.299 [2024-07-23 09:03:02.614598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.299 [2024-07-23 09:03:02.614688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.299 [2024-07-23 09:03:02.614747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.299 [2024-07-23 09:03:02.615411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.299 [2024-07-23 09:03:02.615960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.299 [2024-07-23 09:03:02.616031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.299 [2024-07-23 09:03:02.616080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.299 [2024-07-23 09:03:02.624214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.299 [2024-07-23 09:03:02.632847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.299 [2024-07-23 09:03:02.633793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.299 [2024-07-23 09:03:02.633888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.299 [2024-07-23 09:03:02.633946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.299 [2024-07-23 09:03:02.634524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.299 [2024-07-23 09:03:02.635148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.299 [2024-07-23 09:03:02.635219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.299 [2024-07-23 09:03:02.635268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.299 [2024-07-23 09:03:02.643421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.299 [2024-07-23 09:03:02.651901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.299 [2024-07-23 09:03:02.652730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.299 [2024-07-23 09:03:02.652823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.299 [2024-07-23 09:03:02.652894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.299 [2024-07-23 09:03:02.653488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.299 [2024-07-23 09:03:02.654018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.654089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.654137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.662255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.300 [2024-07-23 09:03:02.670892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.671751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.671843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.671901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.672482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.673036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.673107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.673155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.681168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.300 [2024-07-23 09:03:02.689732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.690546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.690596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.690660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.691298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.691799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.691870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.691918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.700178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.300 [2024-07-23 09:03:02.708700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.709540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.709631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.709688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.710362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.710846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.710929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.710981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.719232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.300 [2024-07-23 09:03:02.727665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.728656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.728750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.728807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.729444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.729972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.730042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.730091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.738367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.300 [2024-07-23 09:03:02.746945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.747839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.747932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.747989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.748531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.749182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.749252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.749299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.757276] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.300 [2024-07-23 09:03:02.757743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.300 [2024-07-23 09:03:02.776933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.777941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.778034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.778092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.778630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.779277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.779368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.779422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.787096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.300 [2024-07-23 09:03:02.795580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.796462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.796511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.796543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.797158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.797717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.797791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.797842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.300 [2024-07-23 09:03:02.805818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.300 [2024-07-23 09:03:02.814428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.300 [2024-07-23 09:03:02.815217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.300 [2024-07-23 09:03:02.815287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.300 [2024-07-23 09:03:02.815359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.300 [2024-07-23 09:03:02.815995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.300 [2024-07-23 09:03:02.816541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.300 [2024-07-23 09:03:02.816582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.300 [2024-07-23 09:03:02.816609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.560 [2024-07-23 09:03:02.824592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.560 [2024-07-23 09:03:02.833363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.560 [2024-07-23 09:03:02.834232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.560 [2024-07-23 09:03:02.834368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.560 [2024-07-23 09:03:02.834406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.560 [2024-07-23 09:03:02.834906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.560 [2024-07-23 09:03:02.835591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.560 [2024-07-23 09:03:02.835662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.560 [2024-07-23 09:03:02.835713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.560 [2024-07-23 09:03:02.843901] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.560 [2024-07-23 09:03:02.844913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.560 [2024-07-23 09:03:02.862857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.560 [2024-07-23 09:03:02.863774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.560 [2024-07-23 09:03:02.863868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.560 [2024-07-23 09:03:02.863925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.560 [2024-07-23 09:03:02.864517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.560 [2024-07-23 09:03:02.865121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.560 [2024-07-23 09:03:02.865192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.560 [2024-07-23 09:03:02.865274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.560 [2024-07-23 09:03:02.873065] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.560 [2024-07-23 09:03:02.873534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.560 [2024-07-23 09:03:02.892614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.560 [2024-07-23 09:03:02.893554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.560 [2024-07-23 09:03:02.893616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.560 [2024-07-23 09:03:02.893687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.560 [2024-07-23 09:03:02.894365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.560 [2024-07-23 09:03:02.894861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.560 [2024-07-23 09:03:02.894932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.560 [2024-07-23 09:03:02.894980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.560 [2024-07-23 09:03:02.903059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.560 [2024-07-23 09:03:02.911553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.560 [2024-07-23 09:03:02.912462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.560 [2024-07-23 09:03:02.912513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.560 [2024-07-23 09:03:02.912545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.560 [2024-07-23 09:03:02.913161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.560 [2024-07-23 09:03:02.913663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.560 [2024-07-23 09:03:02.913736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:02.913785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:02.921817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.561 [2024-07-23 09:03:02.930440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.561 [2024-07-23 09:03:02.931356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.561 [2024-07-23 09:03:02.931406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.561 [2024-07-23 09:03:02.931445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.561 [2024-07-23 09:03:02.931973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.561 [2024-07-23 09:03:02.932541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.561 [2024-07-23 09:03:02.932581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:02.932625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:02.940648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.561 [2024-07-23 09:03:02.949306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.561 [2024-07-23 09:03:02.950127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.561 [2024-07-23 09:03:02.950217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.561 [2024-07-23 09:03:02.950274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.561 [2024-07-23 09:03:02.950807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.561 [2024-07-23 09:03:02.951436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.561 [2024-07-23 09:03:02.951475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:02.951502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:02.959535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.561 [2024-07-23 09:03:02.968103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.561 [2024-07-23 09:03:02.969008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.561 [2024-07-23 09:03:02.969099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.561 [2024-07-23 09:03:02.969156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.561 [2024-07-23 09:03:02.969620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.561 [2024-07-23 09:03:02.970254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.561 [2024-07-23 09:03:02.970353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:02.970384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:02.978633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.561 [2024-07-23 09:03:02.988519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.561 [2024-07-23 09:03:02.989469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.561 [2024-07-23 09:03:02.989561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.561 [2024-07-23 09:03:02.989618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.561 [2024-07-23 09:03:02.990261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.561 [2024-07-23 09:03:02.990938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.561 [2024-07-23 09:03:02.991009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:02.991057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:02.998816] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.561 [2024-07-23 09:03:02.999400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.561 [2024-07-23 09:03:03.019055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.561 [2024-07-23 09:03:03.019947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.561 [2024-07-23 09:03:03.020040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.561 [2024-07-23 09:03:03.020097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.561 [2024-07-23 09:03:03.020632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.561 [2024-07-23 09:03:03.021280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.561 [2024-07-23 09:03:03.021370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:03.021432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:03.029531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.561 [2024-07-23 09:03:03.038085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.561 [2024-07-23 09:03:03.038970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.561 [2024-07-23 09:03:03.039062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.561 [2024-07-23 09:03:03.039119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.561 [2024-07-23 09:03:03.039613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.561 [2024-07-23 09:03:03.040263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.561 [2024-07-23 09:03:03.040364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:03.040394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:03.048737] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.561 [2024-07-23 09:03:03.049332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.561 [2024-07-23 09:03:03.068194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.561 [2024-07-23 09:03:03.069022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.561 [2024-07-23 09:03:03.069114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.561 [2024-07-23 09:03:03.069170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.561 [2024-07-23 09:03:03.069700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.561 [2024-07-23 09:03:03.070368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.561 [2024-07-23 09:03:03.070432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.561 [2024-07-23 09:03:03.070461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.561 [2024-07-23 09:03:03.078452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.822 [2024-07-23 09:03:03.087183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.822 [2024-07-23 09:03:03.088047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.822 [2024-07-23 09:03:03.088147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.822 [2024-07-23 09:03:03.088207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.822 [2024-07-23 09:03:03.088742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.822 [2024-07-23 09:03:03.089418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.822 [2024-07-23 09:03:03.089457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.822 [2024-07-23 09:03:03.089485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.822 [2024-07-23 09:03:03.097571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.822 [2024-07-23 09:03:03.106853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.822 [2024-07-23 09:03:03.107849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.822 [2024-07-23 09:03:03.107942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.822 [2024-07-23 09:03:03.108002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.822 [2024-07-23 09:03:03.108557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.822 [2024-07-23 09:03:03.109167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.822 [2024-07-23 09:03:03.109236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.822 [2024-07-23 09:03:03.109283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.822 [2024-07-23 09:03:03.117460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.822 [2024-07-23 09:03:03.126595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.822 [2024-07-23 09:03:03.127551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.822 [2024-07-23 09:03:03.127664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.822 [2024-07-23 09:03:03.127730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.822 [2024-07-23 09:03:03.128405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.822 [2024-07-23 09:03:03.128920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.822 [2024-07-23 09:03:03.128991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.822 [2024-07-23 09:03:03.129039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.822 [2024-07-23 09:03:03.137211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.822 [2024-07-23 09:03:03.146270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.822 [2024-07-23 09:03:03.147300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.822 [2024-07-23 09:03:03.147409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.822 [2024-07-23 09:03:03.147441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.822 [2024-07-23 09:03:03.147973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.822 [2024-07-23 09:03:03.148545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.822 [2024-07-23 09:03:03.148585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.822 [2024-07-23 09:03:03.148631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.822 [2024-07-23 09:03:03.156767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.822 [2024-07-23 09:03:03.166619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.822 [2024-07-23 09:03:03.167599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.822 [2024-07-23 09:03:03.167688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.822 [2024-07-23 09:03:03.167744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.822 [2024-07-23 09:03:03.168415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.822 [2024-07-23 09:03:03.169061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.822 [2024-07-23 09:03:03.169158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.169208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.176927] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.823 [2024-07-23 09:03:03.177461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.823 [2024-07-23 09:03:03.197176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.823 [2024-07-23 09:03:03.198051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.823 [2024-07-23 09:03:03.198143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.823 [2024-07-23 09:03:03.198199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.823 [2024-07-23 09:03:03.198737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.823 [2024-07-23 09:03:03.199403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.823 [2024-07-23 09:03:03.199442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.199469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.207499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.823 [2024-07-23 09:03:03.216483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.823 [2024-07-23 09:03:03.217406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.823 [2024-07-23 09:03:03.217500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.823 [2024-07-23 09:03:03.217572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.823 [2024-07-23 09:03:03.218213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.823 [2024-07-23 09:03:03.218752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.823 [2024-07-23 09:03:03.218826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.218874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.227027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.823 [2024-07-23 09:03:03.236061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.823 [2024-07-23 09:03:03.236900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.823 [2024-07-23 09:03:03.236993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.823 [2024-07-23 09:03:03.237051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.823 [2024-07-23 09:03:03.237581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.823 [2024-07-23 09:03:03.238228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.823 [2024-07-23 09:03:03.238297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.238370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.246473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.823 [2024-07-23 09:03:03.255756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.823 [2024-07-23 09:03:03.256722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.823 [2024-07-23 09:03:03.256814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.823 [2024-07-23 09:03:03.256871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.823 [2024-07-23 09:03:03.257491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.823 [2024-07-23 09:03:03.258076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.823 [2024-07-23 09:03:03.258146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.258195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.266397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.823 [2024-07-23 09:03:03.275601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.823 [2024-07-23 09:03:03.276868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.823 [2024-07-23 09:03:03.276960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.823 [2024-07-23 09:03:03.277016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.823 [2024-07-23 09:03:03.277563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.823 [2024-07-23 09:03:03.278213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.823 [2024-07-23 09:03:03.278284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.278351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.285910] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.823 [2024-07-23 09:03:03.286452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:50.823 [2024-07-23 09:03:03.304864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.823 [2024-07-23 09:03:03.305755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.823 [2024-07-23 09:03:03.305848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.823 [2024-07-23 09:03:03.305905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.823 [2024-07-23 09:03:03.306483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.823 [2024-07-23 09:03:03.307063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.823 [2024-07-23 09:03:03.307133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.307182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.315358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:50.823 [2024-07-23 09:03:03.324889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:50.823 [2024-07-23 09:03:03.325888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:50.823 [2024-07-23 09:03:03.325981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:50.823 [2024-07-23 09:03:03.326039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:50.823 [2024-07-23 09:03:03.326578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:50.823 [2024-07-23 09:03:03.327223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:50.823 [2024-07-23 09:03:03.327292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:50.823 [2024-07-23 09:03:03.327379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:50.823 [2024-07-23 09:03:03.335094] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:50.823 [2024-07-23 09:03:03.335565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.083 [2024-07-23 09:03:03.353443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.083 [2024-07-23 09:03:03.354353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.083 [2024-07-23 09:03:03.354406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.083 [2024-07-23 09:03:03.354439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.083 [2024-07-23 09:03:03.354964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.083 [2024-07-23 09:03:03.355519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.083 [2024-07-23 09:03:03.355590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.083 [2024-07-23 09:03:03.355643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.083 [2024-07-23 09:03:03.363635] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:51.083 [2024-07-23 09:03:03.364773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.083 [2024-07-23 09:03:03.383651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.083 [2024-07-23 09:03:03.384677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.083 [2024-07-23 09:03:03.384768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.083 [2024-07-23 09:03:03.384826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.083 [2024-07-23 09:03:03.385465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.083 [2024-07-23 09:03:03.386017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.083 [2024-07-23 09:03:03.386086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.083 [2024-07-23 09:03:03.386134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.083 [2024-07-23 09:03:03.393872] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:51.083 [2024-07-23 09:03:03.394434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.083 [2024-07-23 09:03:03.413635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.083 [2024-07-23 09:03:03.414613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.083 [2024-07-23 09:03:03.414704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.083 [2024-07-23 09:03:03.414762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.083 [2024-07-23 09:03:03.415433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.083 [2024-07-23 09:03:03.416079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.083 [2024-07-23 09:03:03.416149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.083 [2024-07-23 09:03:03.416197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.083 [2024-07-23 09:03:03.423919] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:51.083 [2024-07-23 09:03:03.424458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.083 [2024-07-23 09:03:03.442949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.083 [2024-07-23 09:03:03.443806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.083 [2024-07-23 09:03:03.443898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.083 [2024-07-23 09:03:03.443955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.083 [2024-07-23 09:03:03.444513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.083 [2024-07-23 09:03:03.445023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.083 [2024-07-23 09:03:03.445107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.083 [2024-07-23 09:03:03.445158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.083 [2024-07-23 09:03:03.452838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.083 [2024-07-23 09:03:03.462064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.083 [2024-07-23 09:03:03.462919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.083 [2024-07-23 09:03:03.463011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.083 [2024-07-23 09:03:03.463067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.083 [2024-07-23 09:03:03.463583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.083 [2024-07-23 09:03:03.464246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.083 [2024-07-23 09:03:03.464332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.083 [2024-07-23 09:03:03.464399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.083 [2024-07-23 09:03:03.472429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.083 [2024-07-23 09:03:03.481358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.083 [2024-07-23 09:03:03.482206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.083 [2024-07-23 09:03:03.482296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.083 [2024-07-23 09:03:03.482379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.083 [2024-07-23 09:03:03.482826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.083 [2024-07-23 09:03:03.483450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.083 [2024-07-23 09:03:03.483489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.083 [2024-07-23 09:03:03.483516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.083 [2024-07-23 09:03:03.491570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.083 [2024-07-23 09:03:03.500407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.083 [2024-07-23 09:03:03.501381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.083 [2024-07-23 09:03:03.501498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.083 [2024-07-23 09:03:03.501555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.083 [2024-07-23 09:03:03.502195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.084 [2024-07-23 09:03:03.502733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.084 [2024-07-23 09:03:03.502806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.084 [2024-07-23 09:03:03.502855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.084 [2024-07-23 09:03:03.510553] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:51.084 [2024-07-23 09:03:03.511162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.084 [2024-07-23 09:03:03.530446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.084 [2024-07-23 09:03:03.531458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.084 [2024-07-23 09:03:03.531551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.084 [2024-07-23 09:03:03.531608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.084 [2024-07-23 09:03:03.532250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.084 [2024-07-23 09:03:03.532788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.084 [2024-07-23 09:03:03.532859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.084 [2024-07-23 09:03:03.532908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.084 [2024-07-23 09:03:03.541068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.084 [2024-07-23 09:03:03.549215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.084 [2024-07-23 09:03:03.549923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.084 [2024-07-23 09:03:03.550015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.084 [2024-07-23 09:03:03.550073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.084 [2024-07-23 09:03:03.550596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.084 [2024-07-23 09:03:03.551242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.084 [2024-07-23 09:03:03.551328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.084 [2024-07-23 09:03:03.551383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.084 [2024-07-23 09:03:03.559515] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:51.084 [2024-07-23 09:03:03.560095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.084 [2024-07-23 09:03:03.578789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.084 [2024-07-23 09:03:03.579788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.084 [2024-07-23 09:03:03.579882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.084 [2024-07-23 09:03:03.579940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.084 [2024-07-23 09:03:03.580523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.084 [2024-07-23 09:03:03.581129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.084 [2024-07-23 09:03:03.581199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.084 [2024-07-23 09:03:03.581247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.084 [2024-07-23 09:03:03.589387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.084 [2024-07-23 09:03:03.599088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.084 [2024-07-23 09:03:03.599869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.084 [2024-07-23 09:03:03.599962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.084 [2024-07-23 09:03:03.600017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.084 [2024-07-23 09:03:03.600560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.084 [2024-07-23 09:03:03.601177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.084 [2024-07-23 09:03:03.601229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.084 [2024-07-23 09:03:03.601265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.344 [2024-07-23 09:03:03.609059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.344 [2024-07-23 09:03:03.618281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.344 [2024-07-23 09:03:03.619087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.344 [2024-07-23 09:03:03.619178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.344 [2024-07-23 09:03:03.619234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.344 [2024-07-23 09:03:03.619728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.344 [2024-07-23 09:03:03.620388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.344 [2024-07-23 09:03:03.620427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.620454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.628435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.345 [2024-07-23 09:03:03.636903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.637673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.637767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.637828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.638177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.638598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.638670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.638733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.646837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.345 [2024-07-23 09:03:03.656222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.656943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.657032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.657095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.657469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.657856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.657926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.657989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.665817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.345 [2024-07-23 09:03:03.675386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.676036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.676134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.676192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.676567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.677003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.677057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.677123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.685134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.345 [2024-07-23 09:03:03.694377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.695140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.695232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.695291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.695664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.696099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.696165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.696215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.704285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.345 [2024-07-23 09:03:03.713387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.714009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.714098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.714158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.714524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.714913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.714983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.715054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.723263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.345 [2024-07-23 09:03:03.732442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.733160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.733251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.733323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.733675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.734083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.734146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.734211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.742302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.345 [2024-07-23 09:03:03.751440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.752082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.752172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.752235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.752598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.753006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.753078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.753127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.761276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.345 [2024-07-23 09:03:03.770466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.771141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.771233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.771291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.771655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.772067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.772128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.345 [2024-07-23 09:03:03.772193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.345 [2024-07-23 09:03:03.780177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.345 [2024-07-23 09:03:03.789207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.345 [2024-07-23 09:03:03.790017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.345 [2024-07-23 09:03:03.790183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.345 [2024-07-23 09:03:03.790252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.345 [2024-07-23 09:03:03.790617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.345 [2024-07-23 09:03:03.791025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.345 [2024-07-23 09:03:03.791091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.346 [2024-07-23 09:03:03.791156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.346 [2024-07-23 09:03:03.798070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.346 [2024-07-23 09:03:03.807913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.346 [2024-07-23 09:03:03.808699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.346 [2024-07-23 09:03:03.808790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.346 [2024-07-23 09:03:03.808854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.346 [2024-07-23 09:03:03.809389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.346 [2024-07-23 09:03:03.809791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.346 [2024-07-23 09:03:03.809863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.346 [2024-07-23 09:03:03.809911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.346 [2024-07-23 09:03:03.818066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.346 [2024-07-23 09:03:03.826696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.346 [2024-07-23 09:03:03.827428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.346 [2024-07-23 09:03:03.827478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.346 [2024-07-23 09:03:03.827509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.346 [2024-07-23 09:03:03.827858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.346 [2024-07-23 09:03:03.828277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.346 [2024-07-23 09:03:03.828369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.346 [2024-07-23 09:03:03.828426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.346 [2024-07-23 09:03:03.836425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.346 [2024-07-23 09:03:03.845731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.346 [2024-07-23 09:03:03.846459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.346 [2024-07-23 09:03:03.846509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.346 [2024-07-23 09:03:03.846539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.346 [2024-07-23 09:03:03.846901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.346 [2024-07-23 09:03:03.847338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.346 [2024-07-23 09:03:03.847377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.346 [2024-07-23 09:03:03.847404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.346 [2024-07-23 09:03:03.855464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.346 [2024-07-23 09:03:03.864792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.606 [2024-07-23 09:03:03.865811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.606 [2024-07-23 09:03:03.865916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.606 [2024-07-23 09:03:03.865968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.606 [2024-07-23 09:03:03.866335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.606 [2024-07-23 09:03:03.866756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.606 [2024-07-23 09:03:03.866829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.866878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:03.874730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.607 [2024-07-23 09:03:03.883735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:03.884454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:03.884507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:03.884540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:03.884892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:03.885342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:03.885421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.885471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:03.893822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.607 [2024-07-23 09:03:03.902910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:03.903733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:03.903826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:03.903899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:03.904249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:03.904677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:03.904762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.904828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:03.912708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.607 [2024-07-23 09:03:03.921699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:03.922414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:03.922465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:03.922497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:03.922847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:03.923434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:03.923474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.923502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:03.931506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.607 [2024-07-23 09:03:03.940463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:03.941108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:03.941202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:03.941262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:03.941632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:03.942042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:03.942115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.942163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:03.950130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.607 [2024-07-23 09:03:03.959782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:03.960482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:03.960532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:03.960563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:03.960917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:03.961364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:03.961404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.961431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:03.969470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.607 [2024-07-23 09:03:03.978974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:03.979723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:03.979814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:03.979876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:03.980232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:03.980675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:03.980751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.980816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:03.988721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.607 [2024-07-23 09:03:03.997747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:03.998471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:03.998521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:03.998553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:03.998905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:03.999337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:03.999376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:03.999403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:04.007496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.607 [2024-07-23 09:03:04.016880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:04.017626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:04.017719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:04.017779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:04.018131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:04.018504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:04.018543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:04.018625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:04.026586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.607 [2024-07-23 09:03:04.036050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:04.036770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:04.036861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:04.036926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.607 [2024-07-23 09:03:04.037277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.607 [2024-07-23 09:03:04.037711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.607 [2024-07-23 09:03:04.037793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.607 [2024-07-23 09:03:04.037841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.607 [2024-07-23 09:03:04.045853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.607 [2024-07-23 09:03:04.054664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.607 [2024-07-23 09:03:04.055412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.607 [2024-07-23 09:03:04.055463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.607 [2024-07-23 09:03:04.055496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.608 [2024-07-23 09:03:04.055872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.608 [2024-07-23 09:03:04.056260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.608 [2024-07-23 09:03:04.056349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.608 [2024-07-23 09:03:04.056407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.608 [2024-07-23 09:03:04.064552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.608 [2024-07-23 09:03:04.073545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.608 [2024-07-23 09:03:04.074212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.608 [2024-07-23 09:03:04.074303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.608 [2024-07-23 09:03:04.074379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.608 [2024-07-23 09:03:04.074731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.608 [2024-07-23 09:03:04.075148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.608 [2024-07-23 09:03:04.075220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.608 [2024-07-23 09:03:04.075284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.608 [2024-07-23 09:03:04.083417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.608 [2024-07-23 09:03:04.092468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.608 [2024-07-23 09:03:04.093130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.608 [2024-07-23 09:03:04.093221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.608 [2024-07-23 09:03:04.093284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.608 [2024-07-23 09:03:04.093644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.608 [2024-07-23 09:03:04.094054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.608 [2024-07-23 09:03:04.094139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.608 [2024-07-23 09:03:04.094204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.608 [2024-07-23 09:03:04.102266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.608 [2024-07-23 09:03:04.111163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.608 [2024-07-23 09:03:04.111930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.608 [2024-07-23 09:03:04.112021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.608 [2024-07-23 09:03:04.112084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.608 [2024-07-23 09:03:04.112448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.608 [2024-07-23 09:03:04.112842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.608 [2024-07-23 09:03:04.112913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.608 [2024-07-23 09:03:04.112961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.608 [2024-07-23 09:03:04.120973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.868 [2024-07-23 09:03:04.130374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.868 [2024-07-23 09:03:04.131093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.868 [2024-07-23 09:03:04.131192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.868 [2024-07-23 09:03:04.131264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.868 [2024-07-23 09:03:04.131660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.868 [2024-07-23 09:03:04.132140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.868 [2024-07-23 09:03:04.132185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.868 [2024-07-23 09:03:04.132214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.868 [2024-07-23 09:03:04.140377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.868 [2024-07-23 09:03:04.149444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.868 [2024-07-23 09:03:04.150172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.868 [2024-07-23 09:03:04.150266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.868 [2024-07-23 09:03:04.150342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.868 [2024-07-23 09:03:04.150698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.868 [2024-07-23 09:03:04.151118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.868 [2024-07-23 09:03:04.151190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.868 [2024-07-23 09:03:04.151255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.868 [2024-07-23 09:03:04.159355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.868 [2024-07-23 09:03:04.168481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.868 [2024-07-23 09:03:04.169199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.868 [2024-07-23 09:03:04.169291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.868 [2024-07-23 09:03:04.169367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.868 [2024-07-23 09:03:04.169722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.868 [2024-07-23 09:03:04.170141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.868 [2024-07-23 09:03:04.170211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.868 [2024-07-23 09:03:04.170275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.868 [2024-07-23 09:03:04.178414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.868 [2024-07-23 09:03:04.187482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.868 [2024-07-23 09:03:04.188206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.868 [2024-07-23 09:03:04.188300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.868 [2024-07-23 09:03:04.188378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.188730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.189140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.189209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.189258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.197428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.869 [2024-07-23 09:03:04.206462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.207108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.207201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.207263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.207628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.208046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.208115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.208164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.216248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.869 [2024-07-23 09:03:04.225207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.225932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.226023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.226094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.226460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.226858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.226929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.226978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.234953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.869 [2024-07-23 09:03:04.244141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.244885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.244978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.245039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.245403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.245802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.245873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.245936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.253880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.869 [2024-07-23 09:03:04.262912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.263716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.263808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.263871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.264220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.264647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.264721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.264771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.272772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.869 [2024-07-23 09:03:04.281809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.282508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.282559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.282590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.282974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.283414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.283460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.283489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.291597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.869 [2024-07-23 09:03:04.301047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.301754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.301847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.301907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.302258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.302649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.302721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.302770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.310548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.869 [2024-07-23 09:03:04.319634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.320348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.320427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.320459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.320810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.321235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.321342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.321405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.329482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.869 [2024-07-23 09:03:04.338500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.339231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.339342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.339404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.339754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.340182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.340251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.340299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.348458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:51.869 [2024-07-23 09:03:04.357488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.869 [2024-07-23 09:03:04.358141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.869 [2024-07-23 09:03:04.358231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.869 [2024-07-23 09:03:04.358293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.869 [2024-07-23 09:03:04.358658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.869 [2024-07-23 09:03:04.359069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.869 [2024-07-23 09:03:04.359137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.869 [2024-07-23 09:03:04.359200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.869 [2024-07-23 09:03:04.367268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:51.869 [2024-07-23 09:03:04.376209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:51.870 [2024-07-23 09:03:04.376974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:51.870 [2024-07-23 09:03:04.377064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:51.870 [2024-07-23 09:03:04.377125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:51.870 [2024-07-23 09:03:04.377492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:51.870 [2024-07-23 09:03:04.377872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:51.870 [2024-07-23 09:03:04.377943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:51.870 [2024-07-23 09:03:04.378009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:51.870 [2024-07-23 09:03:04.385912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.130 [2024-07-23 09:03:04.395044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.395860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.395960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.396022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.396392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.396791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.396862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.396925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.404910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.130 [2024-07-23 09:03:04.413991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.414781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.414876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.414944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.415298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.415707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.415777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.415825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.423855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.130 [2024-07-23 09:03:04.433134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.433900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.433992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.434053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.434511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.434900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.434971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.435034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.443041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.130 [2024-07-23 09:03:04.451976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.452817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.452910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.452972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.453335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.453736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.453806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.453871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.461817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.130 [2024-07-23 09:03:04.470790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.471524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.471574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.471606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.471956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.472479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.472528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.472557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.480539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.130 [2024-07-23 09:03:04.489935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.490846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.490937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.491000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.491374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.491774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.491844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.491908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.499726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.130 [2024-07-23 09:03:04.508778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.509491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.509542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.509584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.509936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.510489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.510529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.510556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.518592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.130 [2024-07-23 09:03:04.527607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.528355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.528406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.528438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.528788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.529204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.529278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.529356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.537424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.130 [2024-07-23 09:03:04.546459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.547167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.547259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.547329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.130 [2024-07-23 09:03:04.547684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.130 [2024-07-23 09:03:04.548104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.130 [2024-07-23 09:03:04.548173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.130 [2024-07-23 09:03:04.548236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.130 [2024-07-23 09:03:04.556266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.130 [2024-07-23 09:03:04.564704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.130 [2024-07-23 09:03:04.565372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.130 [2024-07-23 09:03:04.565422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.130 [2024-07-23 09:03:04.565454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.131 [2024-07-23 09:03:04.565802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.131 [2024-07-23 09:03:04.566193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.131 [2024-07-23 09:03:04.566263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.131 [2024-07-23 09:03:04.566336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.131 [2024-07-23 09:03:04.574299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.131 [2024-07-23 09:03:04.583788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.131 [2024-07-23 09:03:04.584536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.131 [2024-07-23 09:03:04.584627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.131 [2024-07-23 09:03:04.584687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.131 [2024-07-23 09:03:04.585038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.131 [2024-07-23 09:03:04.585465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.131 [2024-07-23 09:03:04.585537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.131 [2024-07-23 09:03:04.585600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.131 [2024-07-23 09:03:04.594096] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:52.131 [2024-07-23 09:03:04.594519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.131 [2024-07-23 09:03:04.614072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.131 [2024-07-23 09:03:04.614881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.131 [2024-07-23 09:03:04.614990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.131 [2024-07-23 09:03:04.615049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.131 [2024-07-23 09:03:04.615419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.131 [2024-07-23 09:03:04.615857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.131 [2024-07-23 09:03:04.615928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.131 [2024-07-23 09:03:04.615978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.131 [2024-07-23 09:03:04.624516] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:52.131 [2024-07-23 09:03:04.624904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
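From this point the log also interleaves a bdev_nvme NOTICE, "Unable to perform failover, already in progress," emitted just before the reset attempt is reported failed: a failover request arrived while a reset/reconnect was still in flight and was skipped. The sketch below only illustrates that guard pattern in general terms; the struct, field, and function names are hypothetical and this is not SPDK's actual bdev_nvme implementation.

```c
/* Illustrative sketch only -- hypothetical names, not SPDK's bdev_nvme code.
 * Shows the guard pattern implied by the NOTICE above: a failover request is
 * declined as "already in progress" while a reset/recovery is still running. */
#include <stdbool.h>
#include <stdio.h>

struct ctrlr_state {
    bool resetting;   /* set while a reset or failover is in flight */
};

/* Returns true if a failover was started, false if one is already running. */
static bool failover_ctrlr(struct ctrlr_state *ctrlr)
{
    if (ctrlr->resetting) {
        printf("Unable to perform failover, already in progress.\n");
        return false;
    }
    ctrlr->resetting = true;
    /* ... reconnect to the alternate path would be started here ... */
    return true;
}

int main(void)
{
    struct ctrlr_state ctrlr = { .resetting = false };

    failover_ctrlr(&ctrlr);   /* first request starts the failover */
    failover_ctrlr(&ctrlr);   /* second request is skipped with the notice */
    return 0;
}
```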
00:49:52.131 [2024-07-23 09:03:04.644697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.131 [2024-07-23 09:03:04.645416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.131 [2024-07-23 09:03:04.645485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.131 [2024-07-23 09:03:04.645536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.131 [2024-07-23 09:03:04.645982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.131 [2024-07-23 09:03:04.646485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.131 [2024-07-23 09:03:04.646528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.131 [2024-07-23 09:03:04.646556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.654163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.391 [2024-07-23 09:03:04.663873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.391 [2024-07-23 09:03:04.664657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.391 [2024-07-23 09:03:04.664757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.391 [2024-07-23 09:03:04.664815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.391 [2024-07-23 09:03:04.665167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.391 [2024-07-23 09:03:04.665594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.391 [2024-07-23 09:03:04.665667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.391 [2024-07-23 09:03:04.665729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.673623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.391 [2024-07-23 09:03:04.682620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.391 [2024-07-23 09:03:04.683361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.391 [2024-07-23 09:03:04.683434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.391 [2024-07-23 09:03:04.683466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.391 [2024-07-23 09:03:04.683823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.391 [2024-07-23 09:03:04.684424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.391 [2024-07-23 09:03:04.684464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.391 [2024-07-23 09:03:04.684491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.693062] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:52.391 [2024-07-23 09:03:04.693464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.391 [2024-07-23 09:03:04.713207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.391 [2024-07-23 09:03:04.713908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.391 [2024-07-23 09:03:04.714002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.391 [2024-07-23 09:03:04.714059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.391 [2024-07-23 09:03:04.714426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.391 [2024-07-23 09:03:04.714820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.391 [2024-07-23 09:03:04.714892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.391 [2024-07-23 09:03:04.714953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.722890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.391 [2024-07-23 09:03:04.732624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.391 [2024-07-23 09:03:04.733398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.391 [2024-07-23 09:03:04.733491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.391 [2024-07-23 09:03:04.733550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.391 [2024-07-23 09:03:04.733901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.391 [2024-07-23 09:03:04.734324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.391 [2024-07-23 09:03:04.734363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.391 [2024-07-23 09:03:04.734391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.742455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.391 [2024-07-23 09:03:04.751461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.391 [2024-07-23 09:03:04.752108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.391 [2024-07-23 09:03:04.752199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.391 [2024-07-23 09:03:04.752255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.391 [2024-07-23 09:03:04.752618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.391 [2024-07-23 09:03:04.753049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.391 [2024-07-23 09:03:04.753120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.391 [2024-07-23 09:03:04.753182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.761237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.391 [2024-07-23 09:03:04.770250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.391 [2024-07-23 09:03:04.771007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.391 [2024-07-23 09:03:04.771098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.391 [2024-07-23 09:03:04.771159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.391 [2024-07-23 09:03:04.771525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.391 [2024-07-23 09:03:04.771915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.391 [2024-07-23 09:03:04.771986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.391 [2024-07-23 09:03:04.772047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.780563] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:49:52.391 [2024-07-23 09:03:04.780950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.391 [2024-07-23 09:03:04.800269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.391 [2024-07-23 09:03:04.801054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.391 [2024-07-23 09:03:04.801144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.391 [2024-07-23 09:03:04.801204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.391 [2024-07-23 09:03:04.801570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.391 [2024-07-23 09:03:04.801961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.391 [2024-07-23 09:03:04.802032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.391 [2024-07-23 09:03:04.802095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.391 [2024-07-23 09:03:04.810119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.392 [2024-07-23 09:03:04.819160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.392 [2024-07-23 09:03:04.819879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.392 [2024-07-23 09:03:04.819970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.392 [2024-07-23 09:03:04.820028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.392 [2024-07-23 09:03:04.820390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.392 [2024-07-23 09:03:04.820787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.392 [2024-07-23 09:03:04.820859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.392 [2024-07-23 09:03:04.820929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.392 [2024-07-23 09:03:04.828857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.392 [2024-07-23 09:03:04.838039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.392 [2024-07-23 09:03:04.838773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.392 [2024-07-23 09:03:04.838864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.392 [2024-07-23 09:03:04.838928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.392 [2024-07-23 09:03:04.839276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.392 [2024-07-23 09:03:04.839705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.392 [2024-07-23 09:03:04.839772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.392 [2024-07-23 09:03:04.839833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.392 [2024-07-23 09:03:04.847762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.392 [2024-07-23 09:03:04.856749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.392 [2024-07-23 09:03:04.857475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.392 [2024-07-23 09:03:04.857526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.392 [2024-07-23 09:03:04.857557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.392 [2024-07-23 09:03:04.857904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.392 [2024-07-23 09:03:04.858464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.392 [2024-07-23 09:03:04.858504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.392 [2024-07-23 09:03:04.858531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.392 [2024-07-23 09:03:04.866518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.392 [2024-07-23 09:03:04.875898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.392 [2024-07-23 09:03:04.876623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.392 [2024-07-23 09:03:04.876714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.392 [2024-07-23 09:03:04.876775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.392 [2024-07-23 09:03:04.877125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.392 [2024-07-23 09:03:04.877557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.392 [2024-07-23 09:03:04.877629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.392 [2024-07-23 09:03:04.877693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.392 [2024-07-23 09:03:04.885578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.392 [2024-07-23 09:03:04.895130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.392 [2024-07-23 09:03:04.895935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.392 [2024-07-23 09:03:04.896027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.392 [2024-07-23 09:03:04.896089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.392 [2024-07-23 09:03:04.896453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.392 [2024-07-23 09:03:04.896843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.392 [2024-07-23 09:03:04.896937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.392 [2024-07-23 09:03:04.896999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.392 [2024-07-23 09:03:04.904922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.653 [2024-07-23 09:03:04.913571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:04.914230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:04.914345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:04.914400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:04.914749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:04.915144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:04.915216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:04.915278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:04.923420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.653 [2024-07-23 09:03:04.932417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:04.933114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:04.933208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:04.933267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:04.933632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:04.934047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:04.934114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:04.934177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:04.942185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.653 [2024-07-23 09:03:04.951330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:04.952060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:04.952153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:04.952209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:04.952581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:04.953004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:04.953061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:04.953125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:04.961102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.653 [2024-07-23 09:03:04.970228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:04.971004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:04.971096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:04.971159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:04.971524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:04.971908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:04.971978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:04.972026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:04.979956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.653 [2024-07-23 09:03:04.989046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:04.989714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:04.989809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:04.989869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:04.990393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:04.990788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:04.990859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:04.990922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:04.998984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.653 [2024-07-23 09:03:05.007629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:05.008211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:05.008260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:05.008291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:05.008651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:05.009055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:05.009126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:05.009188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:05.016838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.653 [2024-07-23 09:03:05.025502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:05.026411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:05.026463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:05.026495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:05.026890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:05.027481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:05.027521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:05.027548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:05.035752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.653 [2024-07-23 09:03:05.044520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:05.045412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:05.045463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:05.045494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.653 [2024-07-23 09:03:05.046092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.653 [2024-07-23 09:03:05.046671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.653 [2024-07-23 09:03:05.046754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.653 [2024-07-23 09:03:05.046803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.653 [2024-07-23 09:03:05.055069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.653 [2024-07-23 09:03:05.063394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.653 [2024-07-23 09:03:05.064211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.653 [2024-07-23 09:03:05.064302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.653 [2024-07-23 09:03:05.064396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.654 [2024-07-23 09:03:05.064744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.654 [2024-07-23 09:03:05.065170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.654 [2024-07-23 09:03:05.065240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.654 [2024-07-23 09:03:05.065288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.654 [2024-07-23 09:03:05.072251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.654 [2024-07-23 09:03:05.080827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.654 [2024-07-23 09:03:05.081602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.654 [2024-07-23 09:03:05.081653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.654 [2024-07-23 09:03:05.081685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.654 [2024-07-23 09:03:05.082095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.654 [2024-07-23 09:03:05.082659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.654 [2024-07-23 09:03:05.082699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.654 [2024-07-23 09:03:05.082726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.654 [2024-07-23 09:03:05.089257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.654 [2024-07-23 09:03:05.098371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.654 [2024-07-23 09:03:05.099246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.654 [2024-07-23 09:03:05.099353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.654 [2024-07-23 09:03:05.099408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.654 [2024-07-23 09:03:05.099756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.654 [2024-07-23 09:03:05.100388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.654 [2024-07-23 09:03:05.100427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.654 [2024-07-23 09:03:05.100453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.654 [2024-07-23 09:03:05.107175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.654 [2024-07-23 09:03:05.115566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.654 [2024-07-23 09:03:05.116238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.654 [2024-07-23 09:03:05.116344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.654 [2024-07-23 09:03:05.116409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.654 [2024-07-23 09:03:05.116955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.654 [2024-07-23 09:03:05.117522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.654 [2024-07-23 09:03:05.117561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.654 [2024-07-23 09:03:05.117588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.654 [2024-07-23 09:03:05.124391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.654 [2024-07-23 09:03:05.132935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.654 [2024-07-23 09:03:05.133539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.654 [2024-07-23 09:03:05.133589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.654 [2024-07-23 09:03:05.133621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.654 [2024-07-23 09:03:05.133979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.654 [2024-07-23 09:03:05.134349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.654 [2024-07-23 09:03:05.134388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.654 [2024-07-23 09:03:05.134414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.654 [2024-07-23 09:03:05.140200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.654 [2024-07-23 09:03:05.150522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.654 [2024-07-23 09:03:05.151190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.654 [2024-07-23 09:03:05.151281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.654 [2024-07-23 09:03:05.151372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.654 [2024-07-23 09:03:05.151725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.654 [2024-07-23 09:03:05.152171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.654 [2024-07-23 09:03:05.152210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.654 [2024-07-23 09:03:05.152236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.654 [2024-07-23 09:03:05.158998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.654 [2024-07-23 09:03:05.167588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.654 [2024-07-23 09:03:05.168597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.654 [2024-07-23 09:03:05.168665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.654 [2024-07-23 09:03:05.168719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.654 [2024-07-23 09:03:05.169266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.654 [2024-07-23 09:03:05.169849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.654 [2024-07-23 09:03:05.169920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.654 [2024-07-23 09:03:05.169969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.915 [2024-07-23 09:03:05.178208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.915 [2024-07-23 09:03:05.186860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.915 [2024-07-23 09:03:05.187734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.915 [2024-07-23 09:03:05.187831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.915 [2024-07-23 09:03:05.187890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.915 [2024-07-23 09:03:05.188483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.915 [2024-07-23 09:03:05.189083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.915 [2024-07-23 09:03:05.189154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.915 [2024-07-23 09:03:05.189218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.915 [2024-07-23 09:03:05.197422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.915 [2024-07-23 09:03:05.206208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.915 [2024-07-23 09:03:05.207055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.915 [2024-07-23 09:03:05.207150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.915 [2024-07-23 09:03:05.207208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.915 [2024-07-23 09:03:05.207741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.915 [2024-07-23 09:03:05.208399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.915 [2024-07-23 09:03:05.208438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.915 [2024-07-23 09:03:05.208465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.915 [2024-07-23 09:03:05.215383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.915 [2024-07-23 09:03:05.224196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.915 [2024-07-23 09:03:05.225055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.915 [2024-07-23 09:03:05.225149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.915 [2024-07-23 09:03:05.225206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.915 [2024-07-23 09:03:05.225733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.915 [2024-07-23 09:03:05.226397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.915 [2024-07-23 09:03:05.226437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.915 [2024-07-23 09:03:05.226464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.915 [2024-07-23 09:03:05.234520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2548211 Killed "${NVMF_APP[@]}" "$@" 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:52.915 [2024-07-23 09:03:05.242486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2549653 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2549653 00:49:52.915 [2024-07-23 09:03:05.243089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.915 [2024-07-23 09:03:05.243140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.915 [2024-07-23 09:03:05.243178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2549653 ']' 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:52.915 [2024-07-23 09:03:05.243547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:49:52.915 09:03:05 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:52.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:52.915 [2024-07-23 09:03:05.244210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:49:52.915 [2024-07-23 09:03:05.244282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.915 [2024-07-23 09:03:05.244367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.915 09:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:52.915 [2024-07-23 09:03:05.249741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.915 [2024-07-23 09:03:05.258148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.915 [2024-07-23 09:03:05.258777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.915 [2024-07-23 09:03:05.258827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.915 [2024-07-23 09:03:05.258859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.915 [2024-07-23 09:03:05.259211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.915 [2024-07-23 09:03:05.259581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.915 [2024-07-23 09:03:05.259621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.915 [2024-07-23 09:03:05.259647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.915 [2024-07-23 09:03:05.264754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
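At this point in the run the previously started nvmf_tgt process has been killed (the `Killed "${NVMF_APP[@]}"` message above), and `tgt_init` is restarting the target, which is why the host-side reconnect attempts keep failing with errno 111 (connection refused) until the new listener is configured. The sketch below is an illustrative reconstruction of that restart-and-wait step, assuming it behaves the way the echoed messages suggest; the real `nvmfappstart`/`waitforlisten` helpers live in the autotest common scripts and are not reproduced here.

```bash
# Illustrative sketch only (assumed behavior, not the actual autotest helpers).
rpc_sock=/var/tmp/spdk.sock

# Start a fresh target in the test namespace, using the command line echoed in the log above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll until the process is up and listening on its UNIX domain RPC socket,
# which is what the "Waiting for process to start up and listen on UNIX domain socket
# /var/tmp/spdk.sock..." message above refers to.
for _ in $(seq 1 100); do
    [ -S "$rpc_sock" ] && break
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
    sleep 0.1
done
```

Until this restarted target re-creates the TCP listener on 10.0.0.2:4420, every `nvme_tcp_qpair_connect_sock` attempt from the host is refused, producing the repeated "Resetting controller failed" cycles seen before and after this point.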
00:49:52.915 [2024-07-23 09:03:05.273734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.915 [2024-07-23 09:03:05.274335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.915 [2024-07-23 09:03:05.274386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.915 [2024-07-23 09:03:05.274417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.915 [2024-07-23 09:03:05.274770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.915 [2024-07-23 09:03:05.275126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.915 [2024-07-23 09:03:05.275164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.915 [2024-07-23 09:03:05.275190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.915 [2024-07-23 09:03:05.280287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.915 [2024-07-23 09:03:05.289289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.289918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.289968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.289999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.290364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.290721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.290759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.290784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.295893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.916 [2024-07-23 09:03:05.304887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.305480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.305530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.305561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.305913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.306268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.306307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.306348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.311471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.916 [2024-07-23 09:03:05.320526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.321134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.321184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.321215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.321856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.322216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.322254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.322281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.327424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.916 [2024-07-23 09:03:05.336176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.336824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.336874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.336913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.337273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.337639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.337677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.337703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.342808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.916 [2024-07-23 09:03:05.351802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.352402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.352451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.352481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.352832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.353186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.353223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.353248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.358357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.916 [2024-07-23 09:03:05.367397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.368035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.368082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.368112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.368477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.368832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.368868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.368894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.374019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.916 [2024-07-23 09:03:05.383116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.383756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.383806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.383838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.384190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.384562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.384607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.384635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.389788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.916 [2024-07-23 09:03:05.398934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.399586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.399636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.399669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.400027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.400403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.400442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.400469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.405646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:52.916 [2024-07-23 09:03:05.414735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.415343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.415392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.415425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.415783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.416143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.916 [2024-07-23 09:03:05.416180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.916 [2024-07-23 09:03:05.416207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:52.916 [2024-07-23 09:03:05.421362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:52.916 [2024-07-23 09:03:05.430443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:52.916 [2024-07-23 09:03:05.431189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:52.916 [2024-07-23 09:03:05.431242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:52.916 [2024-07-23 09:03:05.431277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:52.916 [2024-07-23 09:03:05.431648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:52.916 [2024-07-23 09:03:05.432010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:52.917 [2024-07-23 09:03:05.432049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:52.917 [2024-07-23 09:03:05.432078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.176 [2024-07-23 09:03:05.437501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.176 [2024-07-23 09:03:05.440117] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:49:53.177 [2024-07-23 09:03:05.440434] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:53.177 [2024-07-23 09:03:05.446114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.446739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.446793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.446828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.447183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.447556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.447598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.447625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.452757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.177 [2024-07-23 09:03:05.461838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.462471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.462522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.462554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.462911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.463271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.463321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.463352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.468512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.177 [2024-07-23 09:03:05.477665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.478244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.478293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.478338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.478698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.479059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.479097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.479124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.484292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.177 [2024-07-23 09:03:05.493476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.494031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.494081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.494112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.494482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.494849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.494892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.494920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.500098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.177 [2024-07-23 09:03:05.509254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.509890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.509954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.509988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.510354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.510733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.510772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.510799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.515917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.177 [2024-07-23 09:03:05.524943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.525507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.525557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.525594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.525945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.526300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.526361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.526389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.531514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.177 [2024-07-23 09:03:05.540541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.541158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.541206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.541246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.541612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.541967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.542004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.542031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.547128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.177 [2024-07-23 09:03:05.556199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.556806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.556855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.556887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.557240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.557610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.557649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.557684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.562806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.177 [2024-07-23 09:03:05.571859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.572477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.572527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.572560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.177 [2024-07-23 09:03:05.572914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.177 [2024-07-23 09:03:05.573269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.177 [2024-07-23 09:03:05.573307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.177 [2024-07-23 09:03:05.573346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.177 [2024-07-23 09:03:05.578481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.177 [2024-07-23 09:03:05.587531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.177 [2024-07-23 09:03:05.588112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.177 [2024-07-23 09:03:05.588159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.177 [2024-07-23 09:03:05.588191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.178 [2024-07-23 09:03:05.588561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.178 [2024-07-23 09:03:05.588918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.178 [2024-07-23 09:03:05.588962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.178 [2024-07-23 09:03:05.588992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.178 [2024-07-23 09:03:05.594099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.178 [2024-07-23 09:03:05.603103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.178 [2024-07-23 09:03:05.603692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.178 [2024-07-23 09:03:05.603740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.178 [2024-07-23 09:03:05.603771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.178 [2024-07-23 09:03:05.604126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.178 [2024-07-23 09:03:05.604494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.178 [2024-07-23 09:03:05.604533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.178 [2024-07-23 09:03:05.604561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.178 [2024-07-23 09:03:05.610422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.178 [2024-07-23 09:03:05.622052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.178 [2024-07-23 09:03:05.622856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.178 [2024-07-23 09:03:05.622946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.178 [2024-07-23 09:03:05.623049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.178 [2024-07-23 09:03:05.623570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.178 [2024-07-23 09:03:05.624216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.178 [2024-07-23 09:03:05.624285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.178 [2024-07-23 09:03:05.624364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.178 [2024-07-23 09:03:05.632457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.178 [2024-07-23 09:03:05.641150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.178 [2024-07-23 09:03:05.641930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.178 [2024-07-23 09:03:05.642020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.178 [2024-07-23 09:03:05.642079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.178 [2024-07-23 09:03:05.642586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.178 EAL: No free 2048 kB hugepages reported on node 1 00:49:53.178 [2024-07-23 09:03:05.643243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.178 [2024-07-23 09:03:05.643328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.178 [2024-07-23 09:03:05.643382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.178 [2024-07-23 09:03:05.651509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.178 [2024-07-23 09:03:05.660192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.178 [2024-07-23 09:03:05.661038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.178 [2024-07-23 09:03:05.661127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.178 [2024-07-23 09:03:05.661183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.178 [2024-07-23 09:03:05.661717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.178 [2024-07-23 09:03:05.662394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.178 [2024-07-23 09:03:05.662433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.178 [2024-07-23 09:03:05.662461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.178 [2024-07-23 09:03:05.670173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.178 [2024-07-23 09:03:05.675945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.178 [2024-07-23 09:03:05.676538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.178 [2024-07-23 09:03:05.676586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.178 [2024-07-23 09:03:05.676618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.178 [2024-07-23 09:03:05.676970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.178 [2024-07-23 09:03:05.677339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.178 [2024-07-23 09:03:05.677377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.178 [2024-07-23 09:03:05.677404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.178 [2024-07-23 09:03:05.682545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.178 [2024-07-23 09:03:05.691573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.178 [2024-07-23 09:03:05.692205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.178 [2024-07-23 09:03:05.692252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.178 [2024-07-23 09:03:05.692283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.178 [2024-07-23 09:03:05.692680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.178 [2024-07-23 09:03:05.693154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.178 [2024-07-23 09:03:05.693198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.178 [2024-07-23 09:03:05.693225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.437 [2024-07-23 09:03:05.698620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.438 [2024-07-23 09:03:05.707166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.707841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.707894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.707935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.708290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.708665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.708703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.708729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.713846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.438 [2024-07-23 09:03:05.722866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.723455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.723505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.723537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.723891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.724245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.724283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.724321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.729462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.438 [2024-07-23 09:03:05.738529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.739095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.739145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.739177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.739558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.739916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.739954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.739983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.742868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:53.438 [2024-07-23 09:03:05.745101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.438 [2024-07-23 09:03:05.754237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.754890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.754943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.754979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.755356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.755730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.755769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.755800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.761008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.438 [2024-07-23 09:03:05.769875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.770580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.770634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.770669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.771029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.771407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.771447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.771477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.776651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.438 [2024-07-23 09:03:05.785483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.786110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.786160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.786193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.786571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.786933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.786972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.787000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.792172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.438 [2024-07-23 09:03:05.801028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.801632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.801681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.801714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.802071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.802450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.802489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.802524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.807698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.438 [2024-07-23 09:03:05.816802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.817440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.817488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.817521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.438 [2024-07-23 09:03:05.817878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.438 [2024-07-23 09:03:05.818235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.438 [2024-07-23 09:03:05.818273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.438 [2024-07-23 09:03:05.818302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.438 [2024-07-23 09:03:05.823484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.438 [2024-07-23 09:03:05.832588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.438 [2024-07-23 09:03:05.833235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.438 [2024-07-23 09:03:05.833286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.438 [2024-07-23 09:03:05.833334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.697 [2024-07-23 09:03:06.063341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:53.697 [2024-07-23 09:03:06.063429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:53.697 [2024-07-23 09:03:06.063471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:53.697 [2024-07-23 09:03:06.063498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:53.697 [2024-07-23 09:03:06.063525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:53.697 [2024-07-23 09:03:06.063675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:49:53.697 [2024-07-23 09:03:06.063730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:49:53.697 [2024-07-23 09:03:06.063742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:49:53.957 [2024-07-23 09:03:06.314374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.957 [2024-07-23 09:03:06.314486] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
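(The app_setup_trace notices above name two ways to pull the nvmf tracepoint data out of the running target. A minimal sketch of both, using only the command and shared-memory path quoted in those notices; the destination path in the second line is an arbitrary choice, not something the log specifies:)

    spdk_trace -s nvmf -i 0            # parse and print the live trace for app "nvmf", shm instance 0, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the raw shm trace file for offline analysis/debug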
00:49:53.957 [2024-07-23 09:03:06.314869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.957 [2024-07-23 09:03:06.314912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.957 [2024-07-23 09:03:06.314945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.957 [2024-07-23 09:03:06.320418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.957 [2024-07-23 09:03:06.329862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.957 [2024-07-23 09:03:06.330510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.957 [2024-07-23 09:03:06.330570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.957 [2024-07-23 09:03:06.330617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.957 [2024-07-23 09:03:06.331000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.957 [2024-07-23 09:03:06.331380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.957 [2024-07-23 09:03:06.331421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.957 [2024-07-23 09:03:06.331448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.957 [2024-07-23 09:03:06.336611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.957 [2024-07-23 09:03:06.345405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.957 [2024-07-23 09:03:06.345943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.957 [2024-07-23 09:03:06.345995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.957 [2024-07-23 09:03:06.346027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.957 [2024-07-23 09:03:06.346399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.957 [2024-07-23 09:03:06.346758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.957 [2024-07-23 09:03:06.346797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.957 [2024-07-23 09:03:06.346824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.957 [2024-07-23 09:03:06.351994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.957 [2024-07-23 09:03:06.361209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.957 [2024-07-23 09:03:06.361971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.957 [2024-07-23 09:03:06.362036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.957 [2024-07-23 09:03:06.362074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.957 [2024-07-23 09:03:06.362458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.957 [2024-07-23 09:03:06.362828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.957 [2024-07-23 09:03:06.362869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.957 [2024-07-23 09:03:06.362901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.957 [2024-07-23 09:03:06.368121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.957 [2024-07-23 09:03:06.376997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.957 [2024-07-23 09:03:06.377663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.957 [2024-07-23 09:03:06.377723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.957 [2024-07-23 09:03:06.377760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.957 [2024-07-23 09:03:06.378129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.957 [2024-07-23 09:03:06.378513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.957 [2024-07-23 09:03:06.378563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.957 [2024-07-23 09:03:06.378597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.957 [2024-07-23 09:03:06.383794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.957 [2024-07-23 09:03:06.392679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.957 [2024-07-23 09:03:06.393360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.957 [2024-07-23 09:03:06.393424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.957 [2024-07-23 09:03:06.393456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.957 [2024-07-23 09:03:06.393831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.957 [2024-07-23 09:03:06.394218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.957 [2024-07-23 09:03:06.394256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.957 [2024-07-23 09:03:06.394283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.957 [2024-07-23 09:03:06.399691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.957 [2024-07-23 09:03:06.408582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.957 [2024-07-23 09:03:06.409186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.957 [2024-07-23 09:03:06.409236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.957 [2024-07-23 09:03:06.409275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.957 [2024-07-23 09:03:06.409654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.957 [2024-07-23 09:03:06.410015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.957 [2024-07-23 09:03:06.410062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.957 [2024-07-23 09:03:06.410089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.958 [2024-07-23 09:03:06.415386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.958 [2024-07-23 09:03:06.424132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.958 [2024-07-23 09:03:06.424723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.958 [2024-07-23 09:03:06.424773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.958 [2024-07-23 09:03:06.424804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.958 [2024-07-23 09:03:06.425156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.958 [2024-07-23 09:03:06.425529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.958 [2024-07-23 09:03:06.425569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.958 [2024-07-23 09:03:06.425595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.958 [2024-07-23 09:03:06.430735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.958 [2024-07-23 09:03:06.439784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.958 [2024-07-23 09:03:06.440422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.958 [2024-07-23 09:03:06.440473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.958 [2024-07-23 09:03:06.440505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.958 [2024-07-23 09:03:06.440860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.958 [2024-07-23 09:03:06.441218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.958 [2024-07-23 09:03:06.441256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.958 [2024-07-23 09:03:06.441282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.958 [2024-07-23 09:03:06.446478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:53.958 [2024-07-23 09:03:06.455483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.958 [2024-07-23 09:03:06.456109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.958 [2024-07-23 09:03:06.456160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.958 [2024-07-23 09:03:06.456191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.958 [2024-07-23 09:03:06.456559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.958 [2024-07-23 09:03:06.456916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.958 [2024-07-23 09:03:06.456954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.958 [2024-07-23 09:03:06.456980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:53.958 [2024-07-23 09:03:06.462096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:53.958 [2024-07-23 09:03:06.471082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:53.958 [2024-07-23 09:03:06.471718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:53.958 [2024-07-23 09:03:06.471769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:53.958 [2024-07-23 09:03:06.471801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:53.958 [2024-07-23 09:03:06.472153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:53.958 [2024-07-23 09:03:06.472528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:53.958 [2024-07-23 09:03:06.472570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:53.958 [2024-07-23 09:03:06.472598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.218 [2024-07-23 09:03:06.478146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:54.218 [2024-07-23 09:03:06.486760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.218 [2024-07-23 09:03:06.487482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.218 [2024-07-23 09:03:06.487538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.218 [2024-07-23 09:03:06.487581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.218 [2024-07-23 09:03:06.487946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.218 [2024-07-23 09:03:06.488321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.218 [2024-07-23 09:03:06.488361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.218 [2024-07-23 09:03:06.488389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.218 [2024-07-23 09:03:06.493567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.218 [2024-07-23 09:03:06.502417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.218 [2024-07-23 09:03:06.503040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.218 [2024-07-23 09:03:06.503091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.218 [2024-07-23 09:03:06.503124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.218 [2024-07-23 09:03:06.503495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.218 [2024-07-23 09:03:06.503856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.218 [2024-07-23 09:03:06.503895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.218 [2024-07-23 09:03:06.503923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.218 [2024-07-23 09:03:06.509094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:54.218 [2024-07-23 09:03:06.517966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.218 [2024-07-23 09:03:06.518517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.218 [2024-07-23 09:03:06.518569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.218 [2024-07-23 09:03:06.518609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.218 [2024-07-23 09:03:06.518965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.218 [2024-07-23 09:03:06.519350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.218 [2024-07-23 09:03:06.519390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.218 [2024-07-23 09:03:06.519416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.218 [2024-07-23 09:03:06.524641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.218 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:49:54.218 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:49:54.218 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:54.218 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:54.219 [2024-07-23 09:03:06.533902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.534441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.534493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.534533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.534902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.535266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.535305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.535350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 [2024-07-23 09:03:06.540584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:54.219 [2024-07-23 09:03:06.549556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.550111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.550161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.550193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.550569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.550940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.551005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.551032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 [2024-07-23 09:03:06.556361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:54.219 [2024-07-23 09:03:06.562881] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:54.219 [2024-07-23 09:03:06.565243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.565864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.565914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.565945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.566324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.566700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.566738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.566765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 [2024-07-23 09:03:06.574639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:54.219 [2024-07-23 09:03:06.582045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.582672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.582722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.582754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.583106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.583514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.583554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.583587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.219 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:54.219 [2024-07-23 09:03:06.588758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.219 [2024-07-23 09:03:06.597684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.598389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.598453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.598491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.598864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.599237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.599278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.599318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 [2024-07-23 09:03:06.604558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:54.219 [2024-07-23 09:03:06.613446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.614213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.614274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.614336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.614709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.615077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.615118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.615151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 [2024-07-23 09:03:06.620367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.219 [2024-07-23 09:03:06.629278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.629881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.629931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.629962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.630334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.630714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.630755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.630782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 [2024-07-23 09:03:06.635994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:54.219 [2024-07-23 09:03:06.644894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.645480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.645529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.645560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.645925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.646288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.646340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.219 [2024-07-23 09:03:06.646378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.219 [2024-07-23 09:03:06.651603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.219 [2024-07-23 09:03:06.660465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.219 [2024-07-23 09:03:06.661091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.219 [2024-07-23 09:03:06.661140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.219 [2024-07-23 09:03:06.661172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.219 [2024-07-23 09:03:06.661538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.219 [2024-07-23 09:03:06.661896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.219 [2024-07-23 09:03:06.661935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.220 [2024-07-23 09:03:06.661961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.220 [2024-07-23 09:03:06.667091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:49:54.220 [2024-07-23 09:03:06.676124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.220 [2024-07-23 09:03:06.676754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.220 [2024-07-23 09:03:06.676804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.220 [2024-07-23 09:03:06.676836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.220 [2024-07-23 09:03:06.677200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.220 [2024-07-23 09:03:06.677573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.220 [2024-07-23 09:03:06.677612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.220 [2024-07-23 09:03:06.677640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:54.220 [2024-07-23 09:03:06.682764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.220 Malloc0 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:54.220 [2024-07-23 09:03:06.691803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.220 [2024-07-23 09:03:06.692448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:54.220 [2024-07-23 09:03:06.692499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:49:54.220 [2024-07-23 09:03:06.692531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:49:54.220 [2024-07-23 09:03:06.692883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:49:54.220 [2024-07-23 09:03:06.693239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:49:54.220 [2024-07-23 09:03:06.693278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:49:54.220 [2024-07-23 09:03:06.693305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
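Interleaved with the reconnect errors, the harness provisions the target over RPC: a Malloc0 bdev backs subsystem nqn.2016-06.io.spdk:cnode1, a namespace is added, and a TCP listener is opened on 10.0.0.2:4420 (the namespace and listener calls follow just below). A minimal standalone sketch of the same provisioning sequence with scripts/rpc.py, assuming a running nvmf_tgt and the default RPC socket:

  # hand-run equivalent of the rpc_cmd sequence in this test (sketch)
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420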
00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:54.220 [2024-07-23 09:03:06.698426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:54.220 [2024-07-23 09:03:06.705601] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:54.220 [2024-07-23 09:03:06.707523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:54.220 09:03:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2548880 00:49:54.478 [2024-07-23 09:03:06.890048] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:50:04.457 00:50:04.457 Latency(us) 00:50:04.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:04.457 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:50:04.457 Verification LBA range: start 0x0 length 0x4000 00:50:04.457 Nvme1n1 : 15.02 3224.31 12.59 5076.32 0.00 15375.65 1468.49 487782.02 00:50:04.457 =================================================================================================================== 00:50:04.457 Total : 3224.31 12.59 5076.32 0.00 15375.65 1468.49 487782.02 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:50:04.457 rmmod nvme_tcp 00:50:04.457 rmmod nvme_fabrics 00:50:04.457 rmmod nvme_keyring 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2549653 ']' 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2549653 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2549653 ']' 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2549653 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2549653 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2549653' 00:50:04.457 killing process with pid 2549653 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2549653 00:50:04.457 09:03:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2549653 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:06.361 09:03:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:50:08.903 00:50:08.903 real 0m31.137s 00:50:08.903 user 1m21.753s 00:50:08.903 sys 0m6.676s 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:50:08.903 ************************************ 00:50:08.903 END TEST nvmf_bdevperf 00:50:08.903 ************************************ 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:50:08.903 09:03:20 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:08.903 ************************************ 00:50:08.903 START TEST nvmf_target_disconnect 00:50:08.903 ************************************ 00:50:08.903 09:03:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:50:08.903 * Looking for test storage... 00:50:08.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:08.903 
09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:50:08.903 09:03:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@297 -- # local -ga x722 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:50:12.196 Found 0000:84:00.0 (0x8086 - 0x159b) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:50:12.196 Found 0000:84:00.1 
(0x8086 - 0x159b) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:50:12.196 Found net devices under 0000:84:00.0: cvl_0_0 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:50:12.196 Found net devices under 0000:84:00.1: cvl_0_1 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
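The discovery loop above matches the Intel E810 IDs (0x8086:0x159b) on 0000:84:00.0 and 0000:84:00.1 and then resolves each PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/, which is where the cvl_0_0 and cvl_0_1 names come from. The same lookup can be done by hand; a small sketch using one of the addresses from this run:

  # map a PCI network function to its net device name via sysfs (sketch)
  pci=0000:84:00.0
  ls "/sys/bus/pci/devices/$pci/net/"    # prints the bound interface, cvl_0_0 on this machine
  ip link show cvl_0_0                   # one way to confirm the link state the harness checks for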
00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:50:12.196 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:50:12.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:12.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:50:12.197 00:50:12.197 --- 10.0.0.2 ping statistics --- 00:50:12.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:12.197 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:50:12.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:12.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:50:12.197 00:50:12.197 --- 10.0.0.1 ping statistics --- 00:50:12.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:12.197 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:50:12.197 ************************************ 00:50:12.197 START TEST nvmf_target_disconnect_tc1 00:50:12.197 ************************************ 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:50:12.197 09:03:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:50:12.197 09:03:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:50:12.456 EAL: No free 2048 kB hugepages reported on node 1 00:50:12.715 [2024-07-23 09:03:25.006278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:12.715 [2024-07-23 09:03:25.006515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7280 with addr=10.0.0.2, port=4420 00:50:12.715 [2024-07-23 09:03:25.006718] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:50:12.715 [2024-07-23 09:03:25.006796] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:50:12.715 [2024-07-23 09:03:25.006852] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:50:12.715 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:50:12.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:50:12.715 Initializing NVMe Controllers 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:50:12.715 00:50:12.715 real 0m0.476s 00:50:12.715 user 0m0.209s 00:50:12.715 sys 0m0.262s 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:50:12.715 ************************************ 00:50:12.715 END TEST nvmf_target_disconnect_tc1 00:50:12.715 ************************************ 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:50:12.715 09:03:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:50:12.715 ************************************ 00:50:12.715 START TEST nvmf_target_disconnect_tc2 00:50:12.715 ************************************ 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2553719 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2553719 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2553719 ']' 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:12.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:50:12.715 09:03:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:12.974 [2024-07-23 09:03:25.286721] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:50:12.974 [2024-07-23 09:03:25.286905] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:12.974 EAL: No free 2048 kB hugepages reported on node 1 00:50:12.974 [2024-07-23 09:03:25.479722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:50:13.544 [2024-07-23 09:03:25.976445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:13.544 [2024-07-23 09:03:25.976574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:13.544 [2024-07-23 09:03:25.976638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:13.544 [2024-07-23 09:03:25.976685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:13.544 [2024-07-23 09:03:25.976731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:13.544 [2024-07-23 09:03:25.976990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:50:13.544 [2024-07-23 09:03:25.977096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:50:13.544 [2024-07-23 09:03:25.977192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:50:13.544 [2024-07-23 09:03:25.977212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.484 09:03:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:14.744 Malloc0 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:14.744 [2024-07-23 09:03:27.026702] 
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:14.744 [2024-07-23 09:03:27.081649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2553917 00:50:14.744 09:03:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:50:14.744 EAL: No free 2048 kB hugepages reported on node 1 00:50:16.651 09:03:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2553719 00:50:16.651 09:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:50:16.651 Read completed with error (sct=0, sc=8) 00:50:16.651 starting I/O failed 00:50:16.651 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 [2024-07-23 09:03:29.129621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 
Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 [2024-07-23 09:03:29.131014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed 
with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 [2024-07-23 09:03:29.132402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Read completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.652 Write completed with error (sct=0, sc=8) 00:50:16.652 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error 
(sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Write completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 Read completed with error (sct=0, sc=8) 00:50:16.653 starting I/O failed 00:50:16.653 [2024-07-23 09:03:29.133173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:50:16.653 [2024-07-23 09:03:29.133425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.133507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.133734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.133824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.134113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.134199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.134490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.134537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.134858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.134942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.135285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.135388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 
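The bursts of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" above each end with spdk_nvme_qpair_process_completions() reporting "CQ transport error -6 (No such device or address)" on qpair ids 3, 2 and 1: once the TCP connection to the target drops, the outstanding I/Os on those queue pairs complete with an error status and the poll call itself returns a negative value. A minimal polling sketch is below; it assumes an already-connected struct spdk_nvme_qpair from the SPDK NVMe host library, the poll_qpair() helper is ours for illustration, and only spdk_nvme_qpair_process_completions() is the real API.

/* Sketch: polling an NVMe queue pair and observing a transport failure.
 * Assumes SPDK headers/libraries are available and `qpair` was obtained
 * from the usual controller/qpair setup (not shown here). */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void poll_qpair(struct spdk_nvme_qpair *qpair)
{
        /* Second argument 0 = no limit on completions processed per call. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0) {
                /* A negative return indicates a transport-level failure,
                 * e.g. -6 (-ENXIO, "No such device or address") as logged
                 * above when the TCP connection to the target is lost. */
                fprintf(stderr, "qpair poll failed: %d\n", rc);
        }
}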
00:50:16.653 [2024-07-23 09:03:29.135665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.135763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.136125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.136210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.136541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.136646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.136956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.137001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.137373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.137420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.137657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.137740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.138054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.138138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.138462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.138509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.138849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.138933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.139293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.139393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 
00:50:16.653 [2024-07-23 09:03:29.139667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.139751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.140099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.140183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.140483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.140530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.140876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.140961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.141330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.141399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.141576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.141622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.141951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.142036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.142377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.142423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.142699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.142802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.143134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.143217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 
00:50:16.653 [2024-07-23 09:03:29.143545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.143610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.143893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.143939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.144228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.144325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.653 qpair failed and we were unable to recover it. 00:50:16.653 [2024-07-23 09:03:29.144557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.653 [2024-07-23 09:03:29.144648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.144959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.145006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.145385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.145438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.145755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.145840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.146189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.146273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.146548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.146641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.146959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.147043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 
00:50:16.654 [2024-07-23 09:03:29.147419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.147521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.147836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.147920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.148232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.148330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.148640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.148686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.148991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.149075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.149420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.149505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.149793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.149839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.150106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.150190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.150575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.150659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.151007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.151074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 
00:50:16.654 [2024-07-23 09:03:29.151372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.151466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.151828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.151912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.152212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.152258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.152606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.152690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.153010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.153094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.153430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.153509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.153827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.153910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.154276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.154389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.154737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.154824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.155174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.155257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 
00:50:16.654 [2024-07-23 09:03:29.155584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.155669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.155979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.156025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.156439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.156524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.156850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.156934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.157265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.157349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.157668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.157752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.158109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.158193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.158504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.158566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.158918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.654 [2024-07-23 09:03:29.159002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.654 qpair failed and we were unable to recover it. 00:50:16.654 [2024-07-23 09:03:29.159380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.159466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 
00:50:16.655 [2024-07-23 09:03:29.159824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.159903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.160217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.160301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.160627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.160712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.161011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.161057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.161392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.161477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.161785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.161881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.162232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.162349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.162670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.162753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.163114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.163198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.163463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.163510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 
00:50:16.655 [2024-07-23 09:03:29.163817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.163900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.164212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.164295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.164669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.164715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.164997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.165080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.165390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.165475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.165819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.165890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.166259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.166358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.166690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.166773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.167120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.167207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.167533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.167579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 
00:50:16.655 [2024-07-23 09:03:29.167831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.167915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.168252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.168297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.168563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.168647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.168990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.169073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.169433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.169524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.655 [2024-07-23 09:03:29.169900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.655 [2024-07-23 09:03:29.169984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.655 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.170298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.170415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.170749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.170795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.171031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.171115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.171421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.171506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 
00:50:16.924 [2024-07-23 09:03:29.171747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.171793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.172017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.172100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.172376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.172461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.172805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.172884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.173226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.173326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.173611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.173694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.174008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.174054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.174377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.174424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.924 [2024-07-23 09:03:29.174717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.924 [2024-07-23 09:03:29.174799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.924 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.175168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.175252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 
00:50:16.925 [2024-07-23 09:03:29.175642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.175750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.176053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.176137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.176443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.176490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.176809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.176891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.177268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.177366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.177707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.177758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.178064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.178146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.178466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.178552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.178884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.178930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.179289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.179383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 
00:50:16.925 [2024-07-23 09:03:29.179743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.179826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.180066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.180113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.180395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.180482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.180779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.180862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.181144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.181189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.181533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.181618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.181986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.182071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.182385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.182432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.182763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.182846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.183164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.183248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 
00:50:16.925 [2024-07-23 09:03:29.183581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.183662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.183974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.184085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.184404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.184490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.184840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.184920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.185272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.185370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.185709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.185793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.186065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.186110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.186390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.186475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.186847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.186930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.187229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.187275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 
00:50:16.925 [2024-07-23 09:03:29.187611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.187695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.188054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.188137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.188502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.188590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.188935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.189019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.925 [2024-07-23 09:03:29.189393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.925 [2024-07-23 09:03:29.189478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.925 qpair failed and we were unable to recover it. 00:50:16.926 [2024-07-23 09:03:29.189787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.926 [2024-07-23 09:03:29.189833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.926 qpair failed and we were unable to recover it. 00:50:16.926 [2024-07-23 09:03:29.190163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.926 [2024-07-23 09:03:29.190247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.926 qpair failed and we were unable to recover it. 00:50:16.926 [2024-07-23 09:03:29.190617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.926 [2024-07-23 09:03:29.190702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.926 qpair failed and we were unable to recover it. 00:50:16.926 [2024-07-23 09:03:29.191044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.926 [2024-07-23 09:03:29.191113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.926 qpair failed and we were unable to recover it. 00:50:16.926 [2024-07-23 09:03:29.191456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.926 [2024-07-23 09:03:29.191542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.926 qpair failed and we were unable to recover it. 
00:50:16.926 [2024-07-23 09:03:29.191902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:50:16.926 [2024-07-23 09:03:29.191987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 
00:50:16.926 qpair failed and we were unable to recover it. 
00:50:16.926 [The same three-message error sequence repeats continuously from 09:03:29.192286 through 09:03:29.281751: roughly 200 further connect() failures (errno = 111) against tqpair=0x61500021ff00, addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it." Identical repetitions condensed.]
00:50:16.932 [2024-07-23 09:03:29.282105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.282189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.282557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.282645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.283038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.283121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.283454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.283540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.283904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.284006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.284355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.284439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.284800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.284883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.285199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.285244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.285595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.285685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.286053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.286135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 
00:50:16.932 [2024-07-23 09:03:29.286488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.286582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.286914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.286998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.287358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.287443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.287778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.287862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.288219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.288302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.288650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.288733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.289036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.289082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.289444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.289527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.289872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.289955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.290287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.290342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 
00:50:16.932 [2024-07-23 09:03:29.290665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.290748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.291064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.291164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.291513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.291587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.291945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.292028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.292382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.292468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.292810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.292856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.293236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.293335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.293687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.293795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.932 qpair failed and we were unable to recover it. 00:50:16.932 [2024-07-23 09:03:29.294149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.932 [2024-07-23 09:03:29.294238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.294628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.294714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 
00:50:16.933 [2024-07-23 09:03:29.295085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.295168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.295524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.295569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.295922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.296004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.296371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.296457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.296832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.296927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.297296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.297407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.297719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.297803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.298117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.298163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.298534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.298619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.298996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.299078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 
00:50:16.933 [2024-07-23 09:03:29.299421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.299501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.299814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.299897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.300228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.300330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.300686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.300762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.301109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.301192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.301557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.301638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.301944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.301988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.302350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.302439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.302796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.302881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.303221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.303294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 
00:50:16.933 [2024-07-23 09:03:29.303655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.303740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.304098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.304182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.304535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.304615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.304973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.305057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.305427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.305514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.305859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.305944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.306301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.306403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.306712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.306797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.307141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.307217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.307565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.307641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 
00:50:16.933 [2024-07-23 09:03:29.308006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.308091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.308430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.308506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.308856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.308941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.309264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.309366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.933 [2024-07-23 09:03:29.309714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.933 [2024-07-23 09:03:29.309793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.933 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.310143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.310228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.310622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.310707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.311053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.311146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.311511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.311597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.311975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.312059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 
00:50:16.934 [2024-07-23 09:03:29.312400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.312477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.312841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.312927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.313271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.313371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.313674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.313721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.314088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.314172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.314521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.314618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.314944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.314990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.315355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.315441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.315808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.315893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.316196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.316251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 
00:50:16.934 [2024-07-23 09:03:29.316630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.316717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.317066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.317150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.317495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.317576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.317931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.318015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.318362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.318448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.318788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.318863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.319228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.319328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.319697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.319782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.320120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.320195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.320540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.320587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 
00:50:16.934 [2024-07-23 09:03:29.320965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.321050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.321400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.321506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.321869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.321953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.322276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.322393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.322705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.934 [2024-07-23 09:03:29.322752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.934 qpair failed and we were unable to recover it. 00:50:16.934 [2024-07-23 09:03:29.323034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.323118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.323472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.323557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.323904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.323986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.324349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.324434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.324754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.324839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 
00:50:16.935 [2024-07-23 09:03:29.325135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.325183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.325547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.325632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.326005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.326091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.326436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.326515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.326867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.326952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.327297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.327398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.327743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.327823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.328140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.328225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.328584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.328670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.329027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.329114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 
00:50:16.935 [2024-07-23 09:03:29.329488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.329573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.329942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.330028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.330373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.330453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.330805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.330888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.331223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.331326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.331671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.331746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.332121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.332205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.332489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.332572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.332916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.332996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.333345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.333431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 
00:50:16.935 [2024-07-23 09:03:29.333751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.333835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.334173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.334244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.334611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.334708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.335053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.335138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.335476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.335554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.335899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.335983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.336346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.336431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.336770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.336844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.337202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.337286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 00:50:16.935 [2024-07-23 09:03:29.337639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.935 [2024-07-23 09:03:29.337724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.935 qpair failed and we were unable to recover it. 
00:50:16.935 [2024-07-23 09:03:29.338064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:16.935 [2024-07-23 09:03:29.338141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:16.935 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation between the entries above and below; only the timestamps advance, console time from 00:50:16.935 to 00:50:16.941 and in-test time from 09:03:29.338 to 09:03:29.423 ...]
00:50:16.941 [2024-07-23 09:03:29.423336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:16.941 [2024-07-23 09:03:29.423410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:16.941 qpair failed and we were unable to recover it.
00:50:16.941 [2024-07-23 09:03:29.423611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.941 [2024-07-23 09:03:29.423696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.941 qpair failed and we were unable to recover it. 00:50:16.941 [2024-07-23 09:03:29.424003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.941 [2024-07-23 09:03:29.424087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.941 qpair failed and we were unable to recover it. 00:50:16.941 [2024-07-23 09:03:29.424410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.941 [2024-07-23 09:03:29.424463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.941 qpair failed and we were unable to recover it. 00:50:16.941 [2024-07-23 09:03:29.424698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.424783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.425124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.425208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.425515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.425562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.425930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.426020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.426377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.426437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.426655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.426741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.427071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.427156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 
00:50:16.942 [2024-07-23 09:03:29.427464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.427511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.427811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.427896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.428247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.428354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.428625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.428724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.429040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.429125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.429422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.429470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.429726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.429795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.430093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.430177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.430543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.430591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.430882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.430967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 
00:50:16.942 [2024-07-23 09:03:29.431249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.431381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.431644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.431691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.431938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.432022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.432386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.432433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.432728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.432813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.433118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.433202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.433529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.433576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.433848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.433946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.434234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.434298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.434572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.434618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 
00:50:16.942 [2024-07-23 09:03:29.434854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.434938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.435238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.435342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.435668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:16.942 [2024-07-23 09:03:29.435715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:16.942 qpair failed and we were unable to recover it. 00:50:16.942 [2024-07-23 09:03:29.436057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.213 [2024-07-23 09:03:29.436105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.213 qpair failed and we were unable to recover it. 00:50:17.213 [2024-07-23 09:03:29.436404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.213 [2024-07-23 09:03:29.436453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.213 qpair failed and we were unable to recover it. 00:50:17.213 [2024-07-23 09:03:29.436747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.213 [2024-07-23 09:03:29.436831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.213 qpair failed and we were unable to recover it. 00:50:17.213 [2024-07-23 09:03:29.437083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.213 [2024-07-23 09:03:29.437147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.213 qpair failed and we were unable to recover it. 00:50:17.213 [2024-07-23 09:03:29.437413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.213 [2024-07-23 09:03:29.437462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.213 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.437743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.437792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.438069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.438133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 
00:50:17.214 [2024-07-23 09:03:29.438451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.438498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.438738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.438785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.438982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.439035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.439207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.439254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.439464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.439511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.439716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.439763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.440033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.440116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.440382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.440429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.440595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.440641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.440843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.440889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 
00:50:17.214 [2024-07-23 09:03:29.441158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.441241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.441505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.441551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.441843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.441927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.442203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.442286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.442550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.442597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.442875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.442961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.443203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.443257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.443468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.443515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.443823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.443870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.444092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.444176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 
00:50:17.214 [2024-07-23 09:03:29.444496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.444543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.444854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.444938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.445202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.445287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.445540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.445587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.445886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.445970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.446285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.446389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.446653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.446735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.447016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.447099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.447379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.447428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.447661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.447747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 
00:50:17.214 [2024-07-23 09:03:29.448040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.214 [2024-07-23 09:03:29.448124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.214 qpair failed and we were unable to recover it. 00:50:17.214 [2024-07-23 09:03:29.448362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.448422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.448580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.448626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.448807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.448893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.449267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.449383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.449630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.449677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.449993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.450078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.450420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.450468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.450721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.450806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.451098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.451182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 
00:50:17.215 [2024-07-23 09:03:29.451456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.451505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.451884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.451970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.452246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.452371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.452557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.452603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.452798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.452883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.453172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.453256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.453515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.453561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.455369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.455423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.455661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.455709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.455981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.456066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 
00:50:17.215 [2024-07-23 09:03:29.456387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.456435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.456711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.456759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.457051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.457138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.457426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.457474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.457715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.457801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.458146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.458230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.458587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.458673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.458987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.459081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.459411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.459459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.459738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.459834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 
00:50:17.215 [2024-07-23 09:03:29.460177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.460260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.460596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.460683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.461020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.461106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.461423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.461471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.461730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.461815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.462107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.215 [2024-07-23 09:03:29.462192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.215 qpair failed and we were unable to recover it. 00:50:17.215 [2024-07-23 09:03:29.462568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.462615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.462892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.462939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.463198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.463284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.463584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.463670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 
00:50:17.216 [2024-07-23 09:03:29.463984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.464031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.464260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.464307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.464497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.464544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.464715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.464762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.465012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.465101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.465430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.465477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.465743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.465843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.466091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.466155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.466438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.466486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.466686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.466750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 
00:50:17.216 [2024-07-23 09:03:29.467051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.467138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.467453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.467501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.467737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.467791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.468093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.468178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.468502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.468553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.468911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.468994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.469369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.469416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.469640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.469724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.470032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.470117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.470437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.470484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 
00:50:17.216 [2024-07-23 09:03:29.470707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.470754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.470987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.471080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.471391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.471439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.471673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.471765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.472130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.472215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.472501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.472562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.472861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.472946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.473205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.473300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.473577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.473625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.473915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.473999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 
00:50:17.216 [2024-07-23 09:03:29.474359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.474406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.474545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.216 [2024-07-23 09:03:29.474639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.216 qpair failed and we were unable to recover it. 00:50:17.216 [2024-07-23 09:03:29.474944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.475028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.475381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.475429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.475668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.475755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.476062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.476148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.476483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.476531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.476848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.476932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.477254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.477356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.477625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.477672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 
00:50:17.217 [2024-07-23 09:03:29.477878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.477925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.478259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.478376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.478644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.478728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.479041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.479106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.479369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.479418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.479709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.479756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.479986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.480034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.480343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.480409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.480667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.480715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.480954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.481040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 
00:50:17.217 [2024-07-23 09:03:29.481286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.481341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.481516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.481563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.481795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.481890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.482114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.482199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.482479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.482527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.482818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.482903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.483181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.483264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.483553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.483601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.483873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.483920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.484136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.484183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 
00:50:17.217 [2024-07-23 09:03:29.484342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.484390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.484602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.484648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.484867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.484952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.485327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.485405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.485637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.485684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.485929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.485976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.486223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.486326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.486570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.217 [2024-07-23 09:03:29.486655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.217 qpair failed and we were unable to recover it. 00:50:17.217 [2024-07-23 09:03:29.487003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.487087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.487360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.487408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 
00:50:17.218 [2024-07-23 09:03:29.487562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.487609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.487812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.487896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.488184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.488230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.488423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.488477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.488688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.488735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.489031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.489116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.489406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.489453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.489700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.489794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.490029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.490077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.490386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.490457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 
00:50:17.218 [2024-07-23 09:03:29.490714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.490799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.491180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.491265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.491508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.491566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.491780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.491827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.492081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.492165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.492532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.492579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.492862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.492909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.493126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.493212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.493553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.493600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.493952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.494037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 
00:50:17.218 [2024-07-23 09:03:29.494353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.494400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.494685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.494731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.495023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.495119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.495446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.495494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.495762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.495810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.496070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.218 [2024-07-23 09:03:29.496153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.218 qpair failed and we were unable to recover it. 00:50:17.218 [2024-07-23 09:03:29.496482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.496529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.496886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.496965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.497293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.497393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.497657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.497703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 
00:50:17.219 [2024-07-23 09:03:29.498015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.498100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.498448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.498496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.498756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.498840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.499116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.499163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.499435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.499483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.499820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.499905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.500275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.500331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.500604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.500650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.500943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.501026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.501372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.501420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 
00:50:17.219 [2024-07-23 09:03:29.501695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.501742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.502048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.502132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.502429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.502475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.502725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.502809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.503125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.503208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.503569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.503661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.504016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.504097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.504455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.504503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.504799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.504883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.505222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.505307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 
00:50:17.219 [2024-07-23 09:03:29.505583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.505630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.505897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.505979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.506333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.506406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.506660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.506744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.507050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.507133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.507425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.507482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.507791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.507874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.508200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.508283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.508552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.508609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.508886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.508970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 
00:50:17.219 [2024-07-23 09:03:29.509264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.509378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.509560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.219 [2024-07-23 09:03:29.509645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.219 qpair failed and we were unable to recover it. 00:50:17.219 [2024-07-23 09:03:29.509946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.509998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.510388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.510435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.510752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.510895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.511253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.511382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.511790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.511914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.512331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.512418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.512843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.512969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.513390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.513441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 
00:50:17.220 [2024-07-23 09:03:29.513669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.513715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.513929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.513997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.514258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.514338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.514615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.514685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.514927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.514993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.515273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.515350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.515622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.515691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.516005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.516073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.516359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.516406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.516636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.516681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 
00:50:17.220 [2024-07-23 09:03:29.516885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.516949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.517150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.517214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.517401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.517449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.517675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.517747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.518049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.518095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.518381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.518428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.518594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.518670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.518912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.518958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.519242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.519287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.519491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.519559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 
00:50:17.220 [2024-07-23 09:03:29.519827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.519916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.520212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.520297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.520566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.520652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.520990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.521074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.521386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.521432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.521606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.521689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.522073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.522162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.220 [2024-07-23 09:03:29.522474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.220 [2024-07-23 09:03:29.522536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.220 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.522834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.522917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.523183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.523266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 
00:50:17.221 [2024-07-23 09:03:29.523527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.523583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.523953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.524046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.524374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.524428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.524640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.524724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.525091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.525175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.525449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.525496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.525736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.525819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.526078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.526160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.526463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.526511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.526787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.526850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 
00:50:17.221 [2024-07-23 09:03:29.527193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.527281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.527548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.527628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.527942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.528025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.528398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.528446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.528617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.528700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.528982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.529065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.529424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.529471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.529735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.529824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.530086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.530170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.530443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.530490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 
00:50:17.221 [2024-07-23 09:03:29.530781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.530866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.531142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.531225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.531479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.531525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.531811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.531894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.532268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.532380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.532538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.532583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.532768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.532852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.533183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.533266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.533501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.533547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.533840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.533892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 
00:50:17.221 [2024-07-23 09:03:29.534142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.534224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.534467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.534513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.221 [2024-07-23 09:03:29.534752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.221 [2024-07-23 09:03:29.534842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.221 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.535160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.535244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.535508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.535576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.535891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.535975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.536283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.536404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.536562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.536615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.536924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.537009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.537377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.537424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 
00:50:17.222 [2024-07-23 09:03:29.537646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.537729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.538007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.538053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.538271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.538372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.538755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.538838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.539167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.539252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.539517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.539564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.539863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.539946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.540260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.540370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.540663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.540747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.541088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.541164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 
00:50:17.222 [2024-07-23 09:03:29.541438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.541522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.541870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.541954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.542341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.542428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.542674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.542720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.542910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.542993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.543226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.543325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.543580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.543663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.543914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.543960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.544182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.544266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.544528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.544621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 
00:50:17.222 [2024-07-23 09:03:29.544962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.545046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.545401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.545513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.545875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.545959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.546304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.546406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.546701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.546785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.547069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.547114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.547327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.222 [2024-07-23 09:03:29.547411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.222 qpair failed and we were unable to recover it. 00:50:17.222 [2024-07-23 09:03:29.547680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.547763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.548150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.548235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.548502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.548554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 
00:50:17.223 [2024-07-23 09:03:29.548816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.548900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.549223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.549323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.549616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.549680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.549988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.550034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.550400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.550471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.550759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.550841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.551149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.551246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.551482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.551528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.551742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.551824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.552080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.552163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 
00:50:17.223 [2024-07-23 09:03:29.552462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.552550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.552831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.552878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.553107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.553189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.553489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.553536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.553792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.553875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.554185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.554231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.554481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.554564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.554929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.555011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.555379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.555463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.555780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.555826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 
00:50:17.223 [2024-07-23 09:03:29.556153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.556236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.556468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.556514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.556719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.556813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.557103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.557150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.557399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.557482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.557747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.557847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.558218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.558301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.223 qpair failed and we were unable to recover it. 00:50:17.223 [2024-07-23 09:03:29.558574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.223 [2024-07-23 09:03:29.558628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.558885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.558969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.559259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.559362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 
00:50:17.224 [2024-07-23 09:03:29.559609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.559692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.559946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.559992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.560327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.560412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.560755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.560838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.561181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.561264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.561527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.561574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.561833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.561916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.562276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.562390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.562710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.562795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.563101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.563152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 
00:50:17.224 [2024-07-23 09:03:29.563461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.563545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.563827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.563909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.564272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.564373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.564585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.564631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.564846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.564928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.565236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.565354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.565608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.565692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.565965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.566010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.566295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.566407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.566756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.566839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 
00:50:17.224 [2024-07-23 09:03:29.567117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.567200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.567439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.567486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.567692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.567777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.568011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.568099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.568491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.568579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.568889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.568965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.569326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.569378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.569563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.569645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.569927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.569983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.570226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.570272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 
00:50:17.224 [2024-07-23 09:03:29.570527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.570605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.570896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.570945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.571220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.224 [2024-07-23 09:03:29.571284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.224 qpair failed and we were unable to recover it. 00:50:17.224 [2024-07-23 09:03:29.571492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.571539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.571822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.571870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.572160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.572236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.572448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.572495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.572766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.572812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.573077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.573141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.573396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.573443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 
00:50:17.225 [2024-07-23 09:03:29.573631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.573694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.573916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.573961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.574233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.574279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.574453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.574499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.574745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.574808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.575025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.575071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.575329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.575375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.575519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.575575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.575855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.575926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.576190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.576260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 
00:50:17.225 [2024-07-23 09:03:29.576462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.576508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.576800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.576867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.577140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.577187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.577422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.577486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.577693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.577758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.577951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.578016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.578275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.578328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.578513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.578575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.578820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.578885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.579183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.579230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 
00:50:17.225 [2024-07-23 09:03:29.579444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.579508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.579696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.579767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.580031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.580095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.580351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.580404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.580605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.580651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.580843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.580907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.581184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.581230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.581434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.225 [2024-07-23 09:03:29.581499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.225 qpair failed and we were unable to recover it. 00:50:17.225 [2024-07-23 09:03:29.581798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.581864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.582144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.582215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 
00:50:17.226 [2024-07-23 09:03:29.582416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.582480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.582714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.582778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.583064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.583111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.583297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.583351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.583576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.583641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.583938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.584022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.584259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.584303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.584507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.584577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.584835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.584900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.585187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.585257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 
00:50:17.226 [2024-07-23 09:03:29.585447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.585511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.585753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.585817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.586078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.586144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.586395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.586460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.586653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.586717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.586908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.586973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.587169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.587214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.587376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.587447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.587655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.587709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.588001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.588078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 
00:50:17.226 [2024-07-23 09:03:29.588324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.588371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.588550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.588622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.588835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.588902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.589162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.589226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.590504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.590558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.590876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.590947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.591231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.591278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.591531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.591607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.591828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.591891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.592075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.592138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 
00:50:17.226 [2024-07-23 09:03:29.592412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.592479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.592679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.592725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.592921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.592985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.593151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.226 [2024-07-23 09:03:29.593196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.226 qpair failed and we were unable to recover it. 00:50:17.226 [2024-07-23 09:03:29.593424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.593470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.593699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.593745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.594003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.594067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.594253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.594298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.594486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.594549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.594757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.594821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 
00:50:17.227 [2024-07-23 09:03:29.595049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.595112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.595399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.595465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.595775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.595821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.596069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.596133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.596328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.596385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.596613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.596659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.596934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.597006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.597266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.597322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.597513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.597587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.597859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.597920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 
00:50:17.227 [2024-07-23 09:03:29.598217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.598282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.598498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.598561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.598753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.598818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.599010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.599075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.599255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.599299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.599573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:50:17.227 [2024-07-23 09:03:29.600105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.600228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.600521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.600571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.600953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.601041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.601322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.601380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 
00:50:17.227 [2024-07-23 09:03:29.601542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.601593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.601756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.601802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.602092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.602140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.602399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.602447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.602592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.602640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.602979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.603062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.603400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.603446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.603656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.603741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.604041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.227 [2024-07-23 09:03:29.604124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.227 qpair failed and we were unable to recover it. 00:50:17.227 [2024-07-23 09:03:29.604397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.604444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 
00:50:17.228 [2024-07-23 09:03:29.604629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.604715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.605076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.605159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.605452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.605498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.605792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.605876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.606209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.606293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.606505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.606552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.606810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.606893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.607269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.607384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.607735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.607820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.608132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.608216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 
00:50:17.228 [2024-07-23 09:03:29.608456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.608502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.608664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.608710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.608946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.609029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.609325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.609410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.609568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.609620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.609809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.609891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.610135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.610228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.610476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.610522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.610751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.610834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.611142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.611225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 
00:50:17.228 [2024-07-23 09:03:29.611471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.611518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.611690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.611778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.612127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.612219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.612467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.612513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.612827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.612910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.613188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.613270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.613492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.613538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.613728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.613811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.228 [2024-07-23 09:03:29.614112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.228 [2024-07-23 09:03:29.614194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.228 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.614429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.614475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 
00:50:17.229 [2024-07-23 09:03:29.614687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.614771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.615087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.615169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.615438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.615485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.615769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.615852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.616185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.616267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.616511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.616558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.616757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.616840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.617098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.617180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.617427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.617475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.617666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.617749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 
00:50:17.229 [2024-07-23 09:03:29.618088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.618173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.618433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.618479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.618711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.618794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.619167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.619251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.619532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.619579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.619926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.620037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.620429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.620476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.620687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.620734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.621064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.621149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.621386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.621432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 
00:50:17.229 [2024-07-23 09:03:29.621581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.621627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.621831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.621914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.622223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.622306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.622527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.622586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.622874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.622957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.623304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.623401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.623608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.623660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.624012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.624095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.624427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.624474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.624683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.624730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 
00:50:17.229 [2024-07-23 09:03:29.625093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.625177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.625465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.625512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.625702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.625749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.626058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.229 [2024-07-23 09:03:29.626141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.229 qpair failed and we were unable to recover it. 00:50:17.229 [2024-07-23 09:03:29.626403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.626450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.626678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.626725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.627056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.627138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.627430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.627477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.627681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.627726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.628023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.628106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 
00:50:17.230 [2024-07-23 09:03:29.628419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.628466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.628644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.628690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.629029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.629117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.629411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.629458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.629638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.629684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.629916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.629999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.630275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.630386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.630568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.630614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.630904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.630988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.631296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.631391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 
00:50:17.230 [2024-07-23 09:03:29.631554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.631612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.631905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.631990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.632297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.632395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.632539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.632591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.632908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.632991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.633254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.633363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.633517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.633562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.633795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.633878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.634211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.634294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.634535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.634592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 
00:50:17.230 [2024-07-23 09:03:29.634859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.634942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.635230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.635329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.635567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.635613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.635823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.635907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.636238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.636338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.636613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.636658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.636896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.636942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.637156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.637202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.637429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.637476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 00:50:17.230 [2024-07-23 09:03:29.637688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.230 [2024-07-23 09:03:29.637734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.230 qpair failed and we were unable to recover it. 
00:50:17.230 [2024-07-23 09:03:29.637950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.637996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.638236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.638283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.638475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.638520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.638877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.638960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.639289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.639396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.639585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.639668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.640020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.640112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.640395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.640442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.640691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.640775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.641107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.641216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 
00:50:17.231 [2024-07-23 09:03:29.641496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.641542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.641719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.641774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.641980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.642064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.642404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.642451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.642670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.642753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.643045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.643128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.643433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.643480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.643789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.643872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.644178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.644261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 00:50:17.231 [2024-07-23 09:03:29.644487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.231 [2024-07-23 09:03:29.644533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.231 qpair failed and we were unable to recover it. 
00:50:17.231 [2024-07-23 09:03:29.644803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:17.231 [2024-07-23 09:03:29.644885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:50:17.231 qpair failed and we were unable to recover it.
00:50:17.231 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for every subsequent reconnect attempt from 2024-07-23 09:03:29.645211 through 09:03:29.725451 (log timestamps 00:50:17.231-00:50:17.506), always for the same tqpair, address, and port ...]
00:50:17.506 [2024-07-23 09:03:29.725719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.506 [2024-07-23 09:03:29.725804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.506 qpair failed and we were unable to recover it. 00:50:17.506 [2024-07-23 09:03:29.726157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.506 [2024-07-23 09:03:29.726239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.506 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.726587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.726677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.726995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.727077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.727408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.727493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.727830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.727903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.728269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.728370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.728694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.728777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.729112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.729184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.729556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.729646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 
00:50:17.507 [2024-07-23 09:03:29.730009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.730101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.730452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.730534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.730845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.730928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.731271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.731385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.731726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.731802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.732146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.732228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.732566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.732638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.732941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.732986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.733349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.733435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.733737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.733820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 
00:50:17.507 [2024-07-23 09:03:29.734169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.734247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.734617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.734701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.735056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.735140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.735473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.735544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.735894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.735977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.736345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.736430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.736790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.736880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.737223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.737306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.737638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.737722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.738022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.738068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 
00:50:17.507 [2024-07-23 09:03:29.738439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.738524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.738839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.738921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.739269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.739373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.739730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.739823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.740193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.740276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.740640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.740744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.741100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.507 [2024-07-23 09:03:29.741184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.507 qpair failed and we were unable to recover it. 00:50:17.507 [2024-07-23 09:03:29.741540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.741624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.741924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.741971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.742332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.742416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 
00:50:17.508 [2024-07-23 09:03:29.742771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.742854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.743199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.743276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.743676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.743759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.744075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.744159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.744507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.744594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.744951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.745034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.745397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.745483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.745836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.745914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.746271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.746371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.746720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.746801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 
00:50:17.508 [2024-07-23 09:03:29.747116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.747162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.747542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.747627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.747971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.748054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.748391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.748462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.748835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.748918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.749295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.749399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.749744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.749815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.750117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.750199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.750579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.750652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.750984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.751029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 
00:50:17.508 [2024-07-23 09:03:29.751386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.751433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.751753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.751836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.752147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.752193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.752544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.752628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.752934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.753017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.753301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.753356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.753657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.753740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.754083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.754166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.754482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.754529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.754890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.754973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 
00:50:17.508 [2024-07-23 09:03:29.755348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.755431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.755780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.755860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.508 [2024-07-23 09:03:29.756225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.508 [2024-07-23 09:03:29.756306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.508 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.756682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.756776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.757135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.757223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.757592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.757675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.758044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.758128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.758461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.758534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.758865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.758948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.759301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.759410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 
00:50:17.509 [2024-07-23 09:03:29.759708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.759754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.760063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.760145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.760502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.760566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.760835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.760881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.761230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.761301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.761637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.761720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.762025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.762072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.762357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.762442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.762792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.762875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.763226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.763343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 
00:50:17.509 [2024-07-23 09:03:29.763717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.763800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.764147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.764229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.764591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.764684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.764996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.765078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.765393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.765457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.765770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.765850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.766190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.766274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.766640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.766722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.767062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.767139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.767432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.767542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 
00:50:17.509 [2024-07-23 09:03:29.767898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.767982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.768267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.768322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.768651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.768734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.509 qpair failed and we were unable to recover it. 00:50:17.509 [2024-07-23 09:03:29.769027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.509 [2024-07-23 09:03:29.769110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.769453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.769530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.769882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.769965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.770323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.770408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.770756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.770826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.771174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.771257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.771630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.771714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 
00:50:17.510 [2024-07-23 09:03:29.772051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.772126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.772406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.772491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.772839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.772921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.773268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.773379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.773752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.773836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.774179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.774262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.774597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.774643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.774987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.775070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.775401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.775485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.775826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.775905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 
00:50:17.510 [2024-07-23 09:03:29.776261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.776363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.776685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.776767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.777109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.777187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.777560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.777644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.777972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.778055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.778405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.778497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.778866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.778949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.779287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.779398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.779743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.779817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.780173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.780254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 
00:50:17.510 [2024-07-23 09:03:29.780628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.780713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.781047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.781116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.781435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.510 [2024-07-23 09:03:29.781519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.510 qpair failed and we were unable to recover it. 00:50:17.510 [2024-07-23 09:03:29.781858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.781941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.782295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.782404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.782752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.782834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.783175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.783258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.783631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.783715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.784083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.784165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.784534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.784618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 
00:50:17.511 [2024-07-23 09:03:29.784965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.785041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.785348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.785433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.785798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.785882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.786215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.786286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.786655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.786738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.787072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.787154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.787523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.787624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.787966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.788048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.788395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.788481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.788818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.788864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 
00:50:17.511 [2024-07-23 09:03:29.789139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.789222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.789542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.789589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.789881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.789927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.790199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.790291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.790672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.790756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.791106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.791188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.791552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.791636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.791996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.792079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.792390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.792437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.792798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.792881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 
00:50:17.511 [2024-07-23 09:03:29.793225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.793323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.511 [2024-07-23 09:03:29.793640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.511 [2024-07-23 09:03:29.793685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.511 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.794061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.794146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.794461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.794570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.794864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.794909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.795236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.795349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.795710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.795794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.796147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.796226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.796607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.796692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.797036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.797119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 
00:50:17.512 [2024-07-23 09:03:29.797466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.797548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.797895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.797977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.798325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.798410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.798745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.798813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.799185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.799269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.799668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.799752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.800049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.800095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.800440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.800524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.800891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.800975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.801331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.801403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 
00:50:17.512 [2024-07-23 09:03:29.801725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.801807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.802160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.802243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.802561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.802608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.802946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.803030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.803379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.803465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.803814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.803902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.804265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.804365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.804711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.804795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.805143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.805229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.805591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.805676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 
00:50:17.512 [2024-07-23 09:03:29.806024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.806108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.806393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.512 [2024-07-23 09:03:29.806440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.512 qpair failed and we were unable to recover it. 00:50:17.512 [2024-07-23 09:03:29.806801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.806883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.807228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.807355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.807712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.807795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.808118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.808201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.808558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.808625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.808960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.809031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.809410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.809496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.809844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.809927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 
00:50:17.513 [2024-07-23 09:03:29.810227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.810271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.810528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.810612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.810923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.811005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.811343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.811415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.811789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.811873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.812195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.812277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.812638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.812684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.812969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.813038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.813375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.813462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.813758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.813804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 
00:50:17.513 [2024-07-23 09:03:29.814150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.814233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.814572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.814655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.814987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.815071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.815386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.815433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.815719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.815803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.816139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.816213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.816576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.816659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.817013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.817095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.817440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.817488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.817810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.817892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 
00:50:17.513 [2024-07-23 09:03:29.818243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.818344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.818634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.818681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.818972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.819055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.819397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.819481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.819806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.819890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.820241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.820339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.820719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.820804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.513 [2024-07-23 09:03:29.821110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.513 [2024-07-23 09:03:29.821169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.513 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.821548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.821633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.821938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.822021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 
00:50:17.514 [2024-07-23 09:03:29.822325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.822371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.822737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.822821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.823168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.823252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.823636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.823729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.824089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.824173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.824534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.824619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.824970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.825060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.825405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.825491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.825816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.825900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.826240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.826330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 
00:50:17.514 [2024-07-23 09:03:29.826645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.826727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.827069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.827152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.827506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.827592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.827950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.828032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.828389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.828472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.828802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.828876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.829244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.829346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.829706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.829789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.830130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.830212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.830533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.830617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 
00:50:17.514 [2024-07-23 09:03:29.830976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.831060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.831401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.831476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.831821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.831903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.832209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.832292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.832659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.832737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.833094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.833176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.833492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.833576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.833908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.834002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.834364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.834448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 00:50:17.514 [2024-07-23 09:03:29.834791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.514 [2024-07-23 09:03:29.834874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.514 qpair failed and we were unable to recover it. 
00:50:17.514 [2024-07-23 09:03:29.835208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.835290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.835681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.835763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.836127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.836211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.836507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.836553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.836857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.836939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.837296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.837397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.837738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.837813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.838169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.838251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.838616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.838699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.839045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.839122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 
00:50:17.515 [2024-07-23 09:03:29.839496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.839581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.839936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.840020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.840344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.840391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.840755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.840838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.841217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.841301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.841654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.841723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.842036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.842119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.842460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.842545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.842853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.842900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.843244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.843371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 
00:50:17.515 [2024-07-23 09:03:29.843633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.843716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.844015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.844060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.844384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.844469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.844811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.844893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.845228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.845304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.845665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.845749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.846077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.846160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.846512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.846589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.846900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.846984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.847343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.847426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 
00:50:17.515 [2024-07-23 09:03:29.847838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.847922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.848226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.848351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.848710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.848792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.849079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.849126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.849457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.849541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.515 [2024-07-23 09:03:29.849888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.515 [2024-07-23 09:03:29.849971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.515 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.850307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.850392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.850734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.850815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.851170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.851253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.851632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.851703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 
00:50:17.516 [2024-07-23 09:03:29.852060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.852152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.852452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.852537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.852878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.852925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.853266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.853368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.853688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.853771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.854053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.854099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.854427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.854513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.854853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.854936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.855272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.855360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.855714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.855796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 
00:50:17.516 [2024-07-23 09:03:29.856153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.856237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.856591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.856672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.857033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.857115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.857461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.857546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.857871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.857917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.858262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.858366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.858650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.858733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.859076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.859152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.859564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.859650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.860008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.860091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 
00:50:17.516 [2024-07-23 09:03:29.860399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.860444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.860783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.860866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.861221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.861305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.861666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.861739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.862094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.862176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.862546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.862630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.862984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.516 [2024-07-23 09:03:29.863077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.516 qpair failed and we were unable to recover it. 00:50:17.516 [2024-07-23 09:03:29.863444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.517 [2024-07-23 09:03:29.863528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.517 qpair failed and we were unable to recover it. 00:50:17.517 [2024-07-23 09:03:29.863870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.517 [2024-07-23 09:03:29.863951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.517 qpair failed and we were unable to recover it. 00:50:17.517 [2024-07-23 09:03:29.864251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.517 [2024-07-23 09:03:29.864297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.517 qpair failed and we were unable to recover it. 
00:50:17.517 [2024-07-23 09:03:29.864685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.517 [2024-07-23 09:03:29.864770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.865127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.865210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.865557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.865628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.865967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.866050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.866407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.866492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.866831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.866905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.867243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.867360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.867705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.867788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.868090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.868137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.868484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.868531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 
00:50:17.518 [2024-07-23 09:03:29.868860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.868953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.869324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.869397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.869662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.869745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.870087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.870169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.870532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.870624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.870965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.871047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.871406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.871491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.871792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.871838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.872143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.872226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 00:50:17.518 [2024-07-23 09:03:29.872542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.872589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.518 qpair failed and we were unable to recover it. 
00:50:17.518 [2024-07-23 09:03:29.872917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.518 [2024-07-23 09:03:29.872963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.873330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.873415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.873762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.873847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.874180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.874251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.874631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.874716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.875067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.875174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.875530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.875612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.875963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.876045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.876411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.876497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.876809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.876855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 
00:50:17.519 [2024-07-23 09:03:29.877197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.877280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.877611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.877696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.878047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.878141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.878485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.878570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.878881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.878964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.879256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.879302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.879659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.879743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.880110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.880193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.880507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.880554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.880898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.880981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 
00:50:17.519 [2024-07-23 09:03:29.881349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.881433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.881740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.881786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.882082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.882165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.882523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.882608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.882894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.882940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.883245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.883362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.883686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.883769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.884111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.884157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.884404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.884502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.884859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.884941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 
00:50:17.519 [2024-07-23 09:03:29.885285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.885392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.885670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.885753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.886110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.886193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.886536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.886634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.886931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.887014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.887283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.519 [2024-07-23 09:03:29.887400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.519 qpair failed and we were unable to recover it. 00:50:17.519 [2024-07-23 09:03:29.887756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.887803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.888073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.888155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.888498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.888583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.888908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.888954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 
00:50:17.520 [2024-07-23 09:03:29.889341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.889426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.889771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.889854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.890151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.890197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.890424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.890502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.890852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.890936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.891200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.891246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.891581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.891664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.892001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.892084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.892368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.892415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.892766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.892848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 
00:50:17.520 [2024-07-23 09:03:29.893146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.893229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.893515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.893561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.893870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.893954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.894261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.894362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.894645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.894691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.894954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.895036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.895393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.895479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.895821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.895867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.896156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.896239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.896630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.896714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 
00:50:17.520 [2024-07-23 09:03:29.897042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.897089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.897362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.897450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.897790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.897872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.898201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.898264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.898626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.898713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.902357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.902461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.902795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.902847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.903183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.903270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.903708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.903795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.904107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.904166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 
00:50:17.520 [2024-07-23 09:03:29.904504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.904602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.904927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.520 [2024-07-23 09:03:29.905011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.520 qpair failed and we were unable to recover it. 00:50:17.520 [2024-07-23 09:03:29.905350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.905400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.905779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.905864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.906190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.906275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.906960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.907049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.907447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.907535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.908337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.908431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.908786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.908867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.909184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.909267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 
00:50:17.521 [2024-07-23 09:03:29.909568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.909653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.909991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.910087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.910448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.910534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.910888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.910973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.911330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.911423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.911729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.911812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.912150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.912233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.912557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.912603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.912900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.912984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.913301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.913399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 
00:50:17.521 [2024-07-23 09:03:29.913732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.913821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.914176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.914259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.914581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.914665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.915008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.915054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.915360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.915446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.915790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.915874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.916168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.916214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.916442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.916539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.916834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.916918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.917241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.917332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 
00:50:17.521 [2024-07-23 09:03:29.917640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.917725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.918072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.918155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.918482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.918580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.918923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.919005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.919415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.919503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.919804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.919850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.920204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.920287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.920658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.920742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.521 qpair failed and we were unable to recover it. 00:50:17.521 [2024-07-23 09:03:29.921056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.521 [2024-07-23 09:03:29.921133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.921440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.921524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 
00:50:17.522 [2024-07-23 09:03:29.921879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.921973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.922285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.922341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.922638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.922722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.923032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.923114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.923454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.923533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.923841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.923925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.924275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.924378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.924687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.924759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.925082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.925166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.925488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.925573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 
00:50:17.522 [2024-07-23 09:03:29.925923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.925985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.926345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.926430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.926740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.926823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.927094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.927141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.927515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.927601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.927891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.927974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.928272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.928330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.928667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.928750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.929083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.929165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.929485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.929532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 
00:50:17.522 [2024-07-23 09:03:29.929885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.929968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.930328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.930413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.930759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.930859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.931155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.931263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.931635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.931718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.932045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.932128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.932477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.932562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.932915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.932999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.933304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.933361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.933618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.933702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 
00:50:17.522 [2024-07-23 09:03:29.934009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.934092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.934417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.934480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.934826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.934910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.935246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.522 [2024-07-23 09:03:29.935363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.522 qpair failed and we were unable to recover it. 00:50:17.522 [2024-07-23 09:03:29.935706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.935752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.935964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.936062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.936402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.936488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.936829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.936906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.937214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.937296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.937626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.937709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 
00:50:17.523 [2024-07-23 09:03:29.938032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.938137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.938454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.938538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.938891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.938974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.939329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.939408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.939698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.939780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.940124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.940207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.940578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.940625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.940920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.941003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.941350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.941435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.941756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.941848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 
00:50:17.523 [2024-07-23 09:03:29.942150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.942233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.942589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.942687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.942985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.943032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.943225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.943336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.943708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.943791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.944120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.944210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.944551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.944634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.944957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.945040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.945333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.945380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.945696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.945778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 
00:50:17.523 [2024-07-23 09:03:29.946116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.946199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.946531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.946578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.523 [2024-07-23 09:03:29.946889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.523 [2024-07-23 09:03:29.946971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.523 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.947281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.947382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.947690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.947737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.948046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.948128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.948444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.948528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.948876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.948943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.949268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.949368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.949753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.949836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 
00:50:17.524 [2024-07-23 09:03:29.950107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.950153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.950467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.950554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.950866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.950950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.951275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.951395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.951737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.951800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.952137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.952220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.952566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.952612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.952964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.953046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.953370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.953454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.953757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.953803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 
00:50:17.524 [2024-07-23 09:03:29.953981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.954033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.954252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.954354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.954688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.954778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.955119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.955201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.955520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.955588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.955909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.955989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.956334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.956419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.956705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.956812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.957091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.957137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.957471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.957518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 
00:50:17.524 [2024-07-23 09:03:29.957797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.957880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.958176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.958222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.958598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.958682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.959028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.959109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.959462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.959559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.959906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.959988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.960295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.960399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.960745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.960836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.524 qpair failed and we were unable to recover it. 00:50:17.524 [2024-07-23 09:03:29.961189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.524 [2024-07-23 09:03:29.961270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.961607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.961691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 
00:50:17.525 [2024-07-23 09:03:29.961993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.962039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.962230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.962294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.962663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.962746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.963014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.963070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.963370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.963454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.963803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.963886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.964212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.964258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.964640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.964724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.965062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.965145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.965476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.965568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 
00:50:17.525 [2024-07-23 09:03:29.965919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.966003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.966361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.966446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.966782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.966873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.967178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.967261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.967649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.967733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.968069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.968115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.968348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.968433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.968718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.968800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.969134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.969201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.969564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.969647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 
00:50:17.525 [2024-07-23 09:03:29.969988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.970080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.970413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.970507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.970825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.970909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.971235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.971334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.971682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.971757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.972124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.972206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.972526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.972572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.972896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.972942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.973151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.973233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.973513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.973560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 
00:50:17.525 [2024-07-23 09:03:29.973838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.973931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.974243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.974359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.974708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.974791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.975112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.975203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.525 qpair failed and we were unable to recover it. 00:50:17.525 [2024-07-23 09:03:29.975557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.525 [2024-07-23 09:03:29.975642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.976000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.976083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.976427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.976501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.976856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.976939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.977249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.977348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.977666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.977712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 
00:50:17.526 [2024-07-23 09:03:29.978012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.978094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.978437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.978521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.978855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.978929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.979281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.979383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.979729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.979812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.980149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.980220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.980536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.980619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.980968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.981049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.981393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.981473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.981826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.981909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 
00:50:17.526 [2024-07-23 09:03:29.982334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.982422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.982728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.982787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.983142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.983224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.983628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.983713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.984060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.984139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.984491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.984575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.984932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.985016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.985351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.985425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.985779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.985862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.986201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.986285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 
00:50:17.526 [2024-07-23 09:03:29.986656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.986747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.987108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.987190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.987541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.987588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.987941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.988023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.988344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.988431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.988678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.988760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.989094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.989166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.989537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.989610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.989954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.990037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 00:50:17.526 [2024-07-23 09:03:29.990340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.526 [2024-07-23 09:03:29.990387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.526 qpair failed and we were unable to recover it. 
00:50:17.526 [2024-07-23 09:03:29.990697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.990780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.991135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.991219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.991574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.991652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.992007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.992089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.992418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.992503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.992852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.992930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.993288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.993388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.993737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.993817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.994131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.994177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.994517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.994601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 
00:50:17.527 [2024-07-23 09:03:29.994903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.994984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.995332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.995411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.995728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.995811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.996150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.996232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.996585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.996660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.997020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.997102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.997447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.997532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.997839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.997886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.998226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.998327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.998690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.998773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 
00:50:17.527 [2024-07-23 09:03:29.999083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.999129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.999451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.999534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:29.999833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:29.999915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.000267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.000374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.000681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.000763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.001107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.001190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.001510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.001557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.001902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.001984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.002301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.002405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.002710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.002756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 
00:50:17.527 [2024-07-23 09:03:30.003071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.003163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.003476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.003560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.003895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.003967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.004306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.004410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.004687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.004767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.527 [2024-07-23 09:03:30.005075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.527 [2024-07-23 09:03:30.005120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.527 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.005421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.005506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.005825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.005907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.006246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.006337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.006659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.006742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 
00:50:17.528 [2024-07-23 09:03:30.007081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.007163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.007479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.007525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.007839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.007922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.008238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.008338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.008702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.008785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.009101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.009207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.009544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.009591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.009930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.010001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.010347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.010432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.010774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.010857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 
00:50:17.528 [2024-07-23 09:03:30.011166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.011212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.011549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.011633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.011972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.012054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.012413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.012506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.012840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.012903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.013221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.013305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.013673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.013720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.014000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.014082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.014434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.014519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.014849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.014895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 
00:50:17.528 [2024-07-23 09:03:30.015125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.015199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.015557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.015604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.015849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.015921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.016193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.016276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.016628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.016712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.528 [2024-07-23 09:03:30.017059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.528 [2024-07-23 09:03:30.017133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.528 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.017487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.017551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.017908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.017992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.018287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.018343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.018661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.018724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 
00:50:17.801 [2024-07-23 09:03:30.018997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.019066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.019277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.019330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.019577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.019620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.019815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.019859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.020120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.020165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.020347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.020391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.801 qpair failed and we were unable to recover it. 00:50:17.801 [2024-07-23 09:03:30.020715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.801 [2024-07-23 09:03:30.020798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.021094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.021138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.021416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.021500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.021820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.021902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 
00:50:17.802 [2024-07-23 09:03:30.022235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.022305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.022599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.022680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.022960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.023042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.023384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.023459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.023808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.023890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.024237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.024338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.024667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.024712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.025065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.025149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.025497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.025583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.025881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.025927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 
00:50:17.802 [2024-07-23 09:03:30.026267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.026367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.026640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.026722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.027066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.027129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.027497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.027582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.027928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.028011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.028343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.028410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.028734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.028816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.029171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.029253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.029606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.029676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.029995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.030077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 
00:50:17.802 [2024-07-23 09:03:30.030396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.030481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.030807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.030875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.031180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.031262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.031574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.031658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.031919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.031964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.032275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.032375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.032677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.032760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.033084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.033156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.033536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.033622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.033965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.034072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 
00:50:17.802 [2024-07-23 09:03:30.034405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.034495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.034856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.802 [2024-07-23 09:03:30.034938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.802 qpair failed and we were unable to recover it. 00:50:17.802 [2024-07-23 09:03:30.035244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.035355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.035698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.035771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.036078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.036162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.036459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.036542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.036837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.036883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.037219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.037301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.037674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.037756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.038051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.038097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 
00:50:17.803 [2024-07-23 09:03:30.038436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.038521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.038863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.038945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.039278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.039364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.039677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.039760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.040081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.040165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.040506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.040584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.040930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.041012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.041351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.041435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.041766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.041831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.042186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.042269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 
00:50:17.803 [2024-07-23 09:03:30.042598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.042682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.043018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.043090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.043436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.043521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.043837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.043919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.044251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.044349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.044656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.044739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.045082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.045164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.045484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.045535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.045835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.045918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.046217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.046300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 
00:50:17.803 [2024-07-23 09:03:30.046653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.046715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.047057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.047140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.047484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.047569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.047901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.047969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.048268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.048382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.048729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.048812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.049106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.049153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.049479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.803 [2024-07-23 09:03:30.049563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.803 qpair failed and we were unable to recover it. 00:50:17.803 [2024-07-23 09:03:30.049902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.049984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.050326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.050403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 
00:50:17.804 [2024-07-23 09:03:30.050750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.050834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.051173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.051236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.051596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.051703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.052003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.052066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.052349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.052434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.052774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.052820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.053071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.053153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.053497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.053581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.053913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.053979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.054343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.054426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 
00:50:17.804 [2024-07-23 09:03:30.054753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.054836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.055165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.055234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.055598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.055683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.055996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.056078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.056432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.056507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.056849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.056931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.057271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.057389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.057703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.057770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.058118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.058201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.058548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.058595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 
00:50:17.804 [2024-07-23 09:03:30.058943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.059019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.059336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.059404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.059696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.059780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.060120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.060219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.060602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.060687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.061024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.061107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.061432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.061500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.061885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.061976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.062262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.062332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.062575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.062634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 
00:50:17.804 [2024-07-23 09:03:30.062859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.062916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.063173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.063225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.065241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.065329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.065572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.804 [2024-07-23 09:03:30.065625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.804 qpair failed and we were unable to recover it. 00:50:17.804 [2024-07-23 09:03:30.065862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.065946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.066273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.066357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.066709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.066792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.067231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.067370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.067749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.067850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.068177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.068262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 
00:50:17.805 [2024-07-23 09:03:30.068620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.068706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.069034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.069083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.069401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.069450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.069700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.069783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.070110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.070157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.070538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.070584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.070919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.071001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.071341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.071388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.071674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.071758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.072071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.072152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 
00:50:17.805 [2024-07-23 09:03:30.072464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.072509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.072775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.072859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.073226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.073323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.073598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.073666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.074013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.074095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.074401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.074448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.074759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.074842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.075194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.075276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.075655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.075738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.076077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.076151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 
00:50:17.805 [2024-07-23 09:03:30.076519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.076567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.076845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.076929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.077259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.077306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.077580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.077659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.078033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.078116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.078415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.078462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.078716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.078799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.805 qpair failed and we were unable to recover it. 00:50:17.805 [2024-07-23 09:03:30.079113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.805 [2024-07-23 09:03:30.079207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.079563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.079632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.079953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.080036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 
00:50:17.806 [2024-07-23 09:03:30.080399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.080446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.080696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.080771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.081101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.081185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.081503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.081550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.081789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.081835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.082168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.082251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.082569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.082634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.082947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.082993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.083348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.083419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.083667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.083746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 
00:50:17.806 [2024-07-23 09:03:30.084050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.084097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.084462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.084553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.084889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.084971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.085359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.085406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.085687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.085787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.086148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.086230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.086517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.086564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.086882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.086964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.087286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.087391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.087668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.087769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 
00:50:17.806 [2024-07-23 09:03:30.088079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.088161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.088464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.088511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.088752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.088820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.089172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.089256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.089618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.089701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.806 qpair failed and we were unable to recover it. 00:50:17.806 [2024-07-23 09:03:30.090004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.806 [2024-07-23 09:03:30.090050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.090410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.090458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.090742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.090824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.091173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.091251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.091561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.091635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 
00:50:17.807 [2024-07-23 09:03:30.092000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.092084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.092374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.092421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.092683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.092764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.093056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.093139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.093495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.093543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.093859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.093942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.094286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.094391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.094671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.094774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.095108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.095192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.095452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.095496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 
00:50:17.807 [2024-07-23 09:03:30.095766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.095854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.096204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.096286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.096570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.096660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.096997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.097069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.097403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.097451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.097732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.097814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.098152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.098231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.098558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.098624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.098942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.099024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.099341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.099387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 
00:50:17.807 [2024-07-23 09:03:30.099681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.099765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.100049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.100132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.100425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.100473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.100733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.100830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.101162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.101245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.101602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.101686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.102040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.102122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.102417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.102464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.102776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.102860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.103180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.103262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 
00:50:17.807 [2024-07-23 09:03:30.103648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.103731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.807 qpair failed and we were unable to recover it. 00:50:17.807 [2024-07-23 09:03:30.104072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.807 [2024-07-23 09:03:30.104146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.104495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.104543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.104813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.104896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.105250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.105345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.105631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.105714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.106043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.106126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.106463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.106510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.106797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.106880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.107164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.107247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 
00:50:17.808 [2024-07-23 09:03:30.107596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.107690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.108044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.108127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.108437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.108483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.108706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.108753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.109117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.109200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.109555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.109651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.109994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.110067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.110405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.110457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.110749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.110832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.111158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.111223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 
00:50:17.808 [2024-07-23 09:03:30.111554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.111624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.111949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.112032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.112379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.112427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.112648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.112732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.113045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.113127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.113465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.113511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.113820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.113904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.114198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.114280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.114615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.114708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.115051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.115133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 
00:50:17.808 [2024-07-23 09:03:30.115476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.115524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.115797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.115886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.116228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.116325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.116625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.116708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.117005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.117051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.117375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.117421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.117665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.117711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.808 [2024-07-23 09:03:30.118039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.808 [2024-07-23 09:03:30.118108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.808 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.118438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.118485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.118778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.118862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 
00:50:17.809 [2024-07-23 09:03:30.119161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.119207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.119487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.119534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.119812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.119896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.120231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.120300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.120630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.120713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.121029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.121111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.121408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.121456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.121758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.121840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.122149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.122232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.122592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.122672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 
00:50:17.809 [2024-07-23 09:03:30.123026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.123111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.123427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.123511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.123848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.123924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.124272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.124388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.124686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.124771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.125105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.125175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.125525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.125572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.125884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.125977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.126329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.126377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.126644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.126728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 
00:50:17.809 [2024-07-23 09:03:30.127071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.127153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.127473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.127529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.127844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.127927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.128240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.128364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.128603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.128649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.129022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.129103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.129460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.129507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.129732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.129778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.130147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.130229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 00:50:17.809 [2024-07-23 09:03:30.130518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.809 [2024-07-23 09:03:30.130563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.809 qpair failed and we were unable to recover it. 
00:50:17.809 [2024-07-23 09:03:30.130900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:17.809 [2024-07-23 09:03:30.130985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420
00:50:17.809 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 09:03:30.131357 through 09:03:30.206748, console timestamps 00:50:17.809 to 00:50:17.815 ...]
00:50:17.815 [2024-07-23 09:03:30.207035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.815 [2024-07-23 09:03:30.207081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.815 qpair failed and we were unable to recover it. 00:50:17.815 [2024-07-23 09:03:30.207344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.815 [2024-07-23 09:03:30.207425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.815 qpair failed and we were unable to recover it. 00:50:17.815 [2024-07-23 09:03:30.207659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.207754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.208084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.208130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.208453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.208500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.208734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.208824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.209178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.209257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.209597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.209642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.209921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.210004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.210376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.210423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 
00:50:17.816 [2024-07-23 09:03:30.210696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.210742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.210998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.211080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.211352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.211399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.211648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.211731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.212022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.212105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.212433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.212480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.212747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.212830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.213172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.213265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.213602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.213649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.213988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.214069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 
00:50:17.816 [2024-07-23 09:03:30.214414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.214461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.214732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.214825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.215151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.215234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.215626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.215710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.216065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.216159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.216473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.216519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.216782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.216864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.217124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.217170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.217435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.217482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.217786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.217868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 
00:50:17.816 [2024-07-23 09:03:30.218168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.218213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.816 [2024-07-23 09:03:30.218557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.816 [2024-07-23 09:03:30.218604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.816 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.218911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.218996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.219388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.219436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.219646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.219693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.220014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.220096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.220378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.220425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.220632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.220678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.220994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.221077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.221405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.221451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 
00:50:17.817 [2024-07-23 09:03:30.221681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.221726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.222090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.222174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.222482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.222529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.223771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.223867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.224228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.224337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.224623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.224669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.224910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.224977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.225303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.225362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.225584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.225630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.225900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.225947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 
00:50:17.817 [2024-07-23 09:03:30.226202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.226249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.226484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.226532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.226798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.226844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.227038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.227084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.227289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.227358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.227568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.227628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.227853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.227899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.228162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.228215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.228415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.228462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.228698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.228745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 
00:50:17.817 [2024-07-23 09:03:30.228989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.229035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.229231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.229277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.229541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.229588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.229819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.229865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.230059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.230107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.230326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.230372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.230572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.230618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.817 [2024-07-23 09:03:30.230856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.817 [2024-07-23 09:03:30.230902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.817 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.231138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.231184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.231420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.231466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 
00:50:17.818 [2024-07-23 09:03:30.231707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.231754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.231934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.231978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.232143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.232189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.232392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.232440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.232622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.232669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.232897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.232943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.233157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.233204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.233413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.233459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.233741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.233788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.234105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.234171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 
00:50:17.818 [2024-07-23 09:03:30.234496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.234545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.234842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.234890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.235185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.235231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.235453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.235499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.235782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.235850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.236140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.236190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.236428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.236478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.236691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.236738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.236992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.237037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.237252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.237323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 
00:50:17.818 [2024-07-23 09:03:30.237496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.237539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.237692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.237737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.237952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.238017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.238219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.238266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.238454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.238499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.238677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.238749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.238912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.238992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.239240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.239287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.239512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.239558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.239869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.239939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 
00:50:17.818 [2024-07-23 09:03:30.240181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.240243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.240483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.240545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.240748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.240811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.818 [2024-07-23 09:03:30.241054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.818 [2024-07-23 09:03:30.241118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.818 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.241377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.241425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.241562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.241616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.241866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.241931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.242150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.242197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.245589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.245646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.245910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.245956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 
00:50:17.819 [2024-07-23 09:03:30.246098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.246154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.246391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.246438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.246703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.246751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.247007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.247081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.247333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.247390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.247581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.247651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.247953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.248022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.248305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.248375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.248557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.248633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.248911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.248959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 
00:50:17.819 [2024-07-23 09:03:30.249234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.249281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.249481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.249527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.249788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.249852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.250108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.250157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.250418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.250489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.250790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.250855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.251105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.251170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.251396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.251465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.251735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.251801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.252008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.252072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 
00:50:17.819 [2024-07-23 09:03:30.252356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.252402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.252681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.252749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.253026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.253090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.253378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.253424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.253641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.253704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.253970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.254035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.254287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.254356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.254505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.254551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.254747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.254793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.819 qpair failed and we were unable to recover it. 00:50:17.819 [2024-07-23 09:03:30.255050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.819 [2024-07-23 09:03:30.255096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 
00:50:17.820 [2024-07-23 09:03:30.255306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.255374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.255564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.255630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.255912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.255980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.256187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.256234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.256458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.256505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.256709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.256774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.257057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.257120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.257391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.257437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.257684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.257748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.257982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.258045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 
00:50:17.820 [2024-07-23 09:03:30.258276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.258333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.258605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.258679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.258932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.258999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.259200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.259246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.259465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.259530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.259821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.259885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.260070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.260134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.260392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.260465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.260739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.260812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.261031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.261094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 
00:50:17.820 [2024-07-23 09:03:30.261329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.261402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.261603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.261666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.261915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.261978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.262219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.262265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.262515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.262569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.262808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.262870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.263121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.263185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.263439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.263504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.263757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.263820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.264062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.264127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 
00:50:17.820 [2024-07-23 09:03:30.264386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.264452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.264714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.264779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.265020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.265085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.265357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.265406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.265695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.820 [2024-07-23 09:03:30.265760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.820 qpair failed and we were unable to recover it. 00:50:17.820 [2024-07-23 09:03:30.266021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.266084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.266352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.266400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.266648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.266714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.267028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.267076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.267357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.267404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 
00:50:17.821 [2024-07-23 09:03:30.267616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.267682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.267999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.268046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.268335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.268381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.268601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.268647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.268871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.268937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.269225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.269292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.269573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.269620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.269915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.269991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.270226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.270271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.270561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.270606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 
00:50:17.821 [2024-07-23 09:03:30.270828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.270890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.271146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.271210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.271421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.271468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.271648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.271722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.272019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.272090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.272323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.272371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.272613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.272676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.272922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.272984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.273172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.273218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.273463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.273510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 
00:50:17.821 [2024-07-23 09:03:30.273767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.273835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.274076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.274139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.274385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.274458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.274690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.274752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.274985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.275057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.821 [2024-07-23 09:03:30.275335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.821 [2024-07-23 09:03:30.275381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.821 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.275634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.275696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.276001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.276071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.276305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.276364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.276585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.276631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 
00:50:17.822 [2024-07-23 09:03:30.276846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.276909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.277212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.277282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.277569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.277615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.277774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.277841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.278056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.278131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.278424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.278489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.278693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.278739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.278962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.279008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.279240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.279286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.279555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.279627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 
00:50:17.822 [2024-07-23 09:03:30.279902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.279948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.280217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.280263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.280574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.280642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.280849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.280913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.281199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.281280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.281599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.281664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.281852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.281915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.282217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.282288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.282548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.282616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.282908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.282984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 
00:50:17.822 [2024-07-23 09:03:30.283260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.283307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.283570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.283616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.283908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.283972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.284211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.284257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.284503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.284550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.284806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.284869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.285152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.285215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.285462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.285509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.285722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.285787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.286080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.286158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 
00:50:17.822 [2024-07-23 09:03:30.286357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.286403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.822 qpair failed and we were unable to recover it. 00:50:17.822 [2024-07-23 09:03:30.286643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.822 [2024-07-23 09:03:30.286706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.286983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.287047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.287330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.287377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.287641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.287715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.287981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.288045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.288331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.288378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.288618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.288665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.288944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.289017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.289195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.289241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 
00:50:17.823 [2024-07-23 09:03:30.289518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.289565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.289819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.289886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.290184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.290231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.290422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.290469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.290761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.290826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.291055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.291117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.291359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.291406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.291602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.291668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.291969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.292044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.292279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.292335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 
00:50:17.823 [2024-07-23 09:03:30.292560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.292605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.292808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.292871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.293155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.293218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.293407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.293453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.293734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.293805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.294018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.294082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.294347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.294394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.294680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.294749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.294995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.295059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.295291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.295350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 
00:50:17.823 [2024-07-23 09:03:30.295568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.295614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.295887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.295933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.296179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.296241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.296529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.296576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.296856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.296921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.297171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.297234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.297490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.297555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.823 [2024-07-23 09:03:30.297804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.823 [2024-07-23 09:03:30.297870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.823 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.298156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.298220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.298522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.298594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 
00:50:17.824 [2024-07-23 09:03:30.298831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.298896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.299193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.299263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.299515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.299579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.299839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.299904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.300169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.300237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.300458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.300523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.300777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.300841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.301144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.301219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.301500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.301578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.301831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.301895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 
00:50:17.824 [2024-07-23 09:03:30.302185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.302249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.302505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.302570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.302847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.302893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.303184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.303248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.303546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.303611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.303876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.303942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.304187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.304233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.304497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.304562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.304869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.304939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.305216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.305262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 
00:50:17.824 [2024-07-23 09:03:30.305477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.305547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.305784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.305858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.306155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.306232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.306512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.306594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.306902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.306977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.307243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.307289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.307533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.307579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.307886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.307955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.308194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.308240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:17.824 [2024-07-23 09:03:30.308488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.308538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 
00:50:17.824 [2024-07-23 09:03:30.308727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:17.824 [2024-07-23 09:03:30.308791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:17.824 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.309099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.309164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.309352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.309399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.309662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.309726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.309962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.310024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.310262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.310318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.310535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.310609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.310851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.310912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.311127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.311192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.311427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.311494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 
00:50:18.114 [2024-07-23 09:03:30.311702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.311767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.311985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.312049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.312294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.312352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.312555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.312624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.312878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.312957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.313177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.313223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.313502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.313573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.313801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.313866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.314114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.314182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.314428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.314499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 
00:50:18.114 [2024-07-23 09:03:30.314702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.314772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.315014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.315061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.315351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.315398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.315661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.315728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.315921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.315994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.316282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.316348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.316592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.316657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.316932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.316995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.317286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.317358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.317598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.317649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 
00:50:18.114 [2024-07-23 09:03:30.317893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.317961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.114 [2024-07-23 09:03:30.318157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.114 [2024-07-23 09:03:30.318207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.114 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.318483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.318532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.318837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.318909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.319196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.319269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.319563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.319636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.319939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.320007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.320302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.320357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.320541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.320588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.320797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.320865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 
00:50:18.115 [2024-07-23 09:03:30.321110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.321177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.321499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.321550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.321779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.321860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.322150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.322222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.322525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.322596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.322806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.322875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.323111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.323179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.323441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.323513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.323801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.323872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.324155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.324222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 
00:50:18.115 [2024-07-23 09:03:30.324465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.324516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.324782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.324849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.325104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.325171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.325396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.325464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.325749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.325802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.326013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.326081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.326274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.326337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.326658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.326734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.326997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.327065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.327259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.327322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 
00:50:18.115 [2024-07-23 09:03:30.327633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.327682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.327921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.327988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.328247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.328293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.328550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.328602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.328862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.328915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.329187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.329254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.329524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.329593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.329849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.115 [2024-07-23 09:03:30.329918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.115 qpair failed and we were unable to recover it. 00:50:18.115 [2024-07-23 09:03:30.330159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.330210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.330473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.330543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 
00:50:18.116 [2024-07-23 09:03:30.330854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.330908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.331130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.331181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.331385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.331460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.331728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.331779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.331992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.332059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.332347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.332398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.332603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.332673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.332964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.333044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.333279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.333339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.333640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.333711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 
00:50:18.116 [2024-07-23 09:03:30.333969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.334035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.334290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.334352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.334594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.334645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.334913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.334962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.335211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.335257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.335506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.335553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.335852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.335902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.336177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.336228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.336440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.336490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.336770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.336821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 
00:50:18.116 [2024-07-23 09:03:30.337075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.337142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.337393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.337462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.337761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.337811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.338115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.338183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.338423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.338477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.338726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.338790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.338993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.339058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.339324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.339370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.339668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.339737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.339954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.340018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 
00:50:18.116 [2024-07-23 09:03:30.340279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.340342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.340534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.340581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.340846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.340911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.116 [2024-07-23 09:03:30.341189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.116 [2024-07-23 09:03:30.341254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.116 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.341497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.341544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.341752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.341816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.342113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.342192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.342503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.342567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.342819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.342882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.343127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.343190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 
00:50:18.117 [2024-07-23 09:03:30.343471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.343535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.343843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.343889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.344179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.344256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.344549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.344620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.344897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.344962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.345186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.345232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.345483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.345547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.345844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.345890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.346130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.346192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.346507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.346556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 
00:50:18.117 [2024-07-23 09:03:30.346781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.346846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.347099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.347166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.347420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.347483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.347770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.347836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.348084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.348132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.348397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.348467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.348684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.348748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.349002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.349066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.349328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.349374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.349613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.349678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 
00:50:18.117 [2024-07-23 09:03:30.349956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.350025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.350292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.350349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.350586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.350631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.350888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.350951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.351237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.351307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.351539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.351582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.351840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.351902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.352950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.353002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.353261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.353332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 00:50:18.117 [2024-07-23 09:03:30.353600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.117 [2024-07-23 09:03:30.353676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.117 qpair failed and we were unable to recover it. 
00:50:18.117 [2024-07-23 09:03:30.353966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.354043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.354301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.354360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.354602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.354648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.354902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.354967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.355183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.355248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.355510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.355557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.355816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.355879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.356149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.356212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.356546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.356617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.356878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.356939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 
00:50:18.118 [2024-07-23 09:03:30.357179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.357223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.357411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.357456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.357737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.357802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.358089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.358155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.358469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.358538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.358871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.358960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.359329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.359399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.359657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.359740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.360095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.360179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.360501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.360548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 
00:50:18.118 [2024-07-23 09:03:30.360896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.360978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.361367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.361414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.361709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.361792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.362118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.362200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.362542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.362634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.362953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.363035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.363342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.363389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.363643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.363722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.364075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.364156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 00:50:18.118 [2024-07-23 09:03:30.364475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.118 [2024-07-23 09:03:30.364521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.118 qpair failed and we were unable to recover it. 
00:50:18.119 [2024-07-23 09:03:30.364815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.364897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.365258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.365370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.365623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.365684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.366045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.366129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.366453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.366506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.366749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.366792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.367165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.367247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.367582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.367666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.367975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.368057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.368403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.368450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 
00:50:18.119 [2024-07-23 09:03:30.368708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.368790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.369152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.369236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.369559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.369648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.369990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.370073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.370393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.370440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.370729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.370812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.371125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.371207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.371516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.371563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.371924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.372007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.372354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.372401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 
00:50:18.119 [2024-07-23 09:03:30.372627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.372672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.373007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.373090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.373388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.373435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.373666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.373709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.374075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.374183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.374551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.374598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.374952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.375027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.375338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.375423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.375676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.375759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.376077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.376123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 
00:50:18.119 [2024-07-23 09:03:30.376471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.376517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.376856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.376939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.377295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.377385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.119 [2024-07-23 09:03:30.377614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.119 [2024-07-23 09:03:30.377660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.119 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.377970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.378052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.378408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.378456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.378705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.378789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.379145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.379228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.379545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.379590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.379943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.380025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 
00:50:18.120 [2024-07-23 09:03:30.380398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.380445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.380719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.380812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.381131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.381212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.381527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.381573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.381849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.381901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.382237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.382338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.382673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.382756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.383103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.383189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.383546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.383593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.383883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.383966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 
00:50:18.120 [2024-07-23 09:03:30.384276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.384376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.384629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.384707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.385028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.385111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.385440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.385486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.385746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.385828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.386146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.386229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.386596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.386685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.387028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.387110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.387447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.387494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.387690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.387734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 
00:50:18.120 [2024-07-23 09:03:30.388023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.388105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.388392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.388448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.388733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.388827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.389163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.389245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.389570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.389616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.389932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.389978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.390216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.390261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.390595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.390687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.391010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.391056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 00:50:18.120 [2024-07-23 09:03:30.391388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.120 [2024-07-23 09:03:30.391435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.120 qpair failed and we were unable to recover it. 
00:50:18.120 [2024-07-23 09:03:30.391672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.391716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.391962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.392036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.392397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.392444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.392720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.392803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.393068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.393114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.393402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.393449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.393700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.393783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.394096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.394142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.394458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.394505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.394714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.394758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 
00:50:18.121 [2024-07-23 09:03:30.395025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.395112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.395409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.395456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.395704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.395786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.396102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.396148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.396424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.396476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.396743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.396825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.397116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.397162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.397468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.397515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.397783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.397887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.398147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.398193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 
00:50:18.121 [2024-07-23 09:03:30.398544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.398591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.398829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.398894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.399205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.399251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.399535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.399609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.399933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.400015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.400302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.400357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.400558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.400619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.400954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.401037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.401349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.401396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.401656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.401740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 
00:50:18.121 [2024-07-23 09:03:30.402009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.402088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.402431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.402478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.402729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.402811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.403182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.403265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.403597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.403643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.403909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.121 [2024-07-23 09:03:30.404007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.121 qpair failed and we were unable to recover it. 00:50:18.121 [2024-07-23 09:03:30.404357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.404425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.404710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.404810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.405152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.405234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.405607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.405692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 
00:50:18.122 [2024-07-23 09:03:30.406000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.406046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.406412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.406460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.406691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.406735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.406975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.407018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.407402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.407447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.407730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.407813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.408122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.408169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.408440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.408487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.408797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.408880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.409183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.409229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 
00:50:18.122 [2024-07-23 09:03:30.409594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.409639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.409899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.409981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.410230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.410275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.410516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.410560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.410899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.410980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.411326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.411398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.411670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.411759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.412081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.412164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.412527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.412623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.412951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.413032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 
00:50:18.122 [2024-07-23 09:03:30.413307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.413411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.413648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.413694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.414018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.414101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.414378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.414423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.414643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.414689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.414990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.415072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.415370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.415436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.415620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.415664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.415945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.416028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.416347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.416424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 
00:50:18.122 [2024-07-23 09:03:30.416674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.416746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.417100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.122 [2024-07-23 09:03:30.417183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.122 qpair failed and we were unable to recover it. 00:50:18.122 [2024-07-23 09:03:30.417513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.417560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.417842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.417937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.418256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.418355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.418664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.418748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.419088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.419134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.419425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.419472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.419755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.419837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.420136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.420182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 
00:50:18.123 [2024-07-23 09:03:30.420481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.420528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.420709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.420761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.421041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.421165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.421488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.421534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.421806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.421889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.422177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.422222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.422466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.422511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.422841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.422923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.423232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.423277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.423491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.423536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 
00:50:18.123 [2024-07-23 09:03:30.423840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.423922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.424208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.424254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.424496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.424542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.424857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.424939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.425260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.425305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.425533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.425621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.425907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.425988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.426252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.426298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.426608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.426654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 00:50:18.123 [2024-07-23 09:03:30.426898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.123 [2024-07-23 09:03:30.426971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.123 qpair failed and we were unable to recover it. 
00:50:18.129 [2024-07-23 09:03:30.501878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.501960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.502237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.502336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.502652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.502749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.503068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.503151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.503484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.503531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.503811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.503899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.504237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.504337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.504676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.504760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.505092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.505170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.505471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.505515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 
00:50:18.129 [2024-07-23 09:03:30.505752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.505834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.506146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.506218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.506513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.506559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.506848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.506930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.507196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.507241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.507517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.507563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.507838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.507920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.508191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.508236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.508531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.508578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.508902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.508985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 
00:50:18.129 [2024-07-23 09:03:30.509298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.509355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.509522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.509587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.509896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.509985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.510355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.129 [2024-07-23 09:03:30.510401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.129 qpair failed and we were unable to recover it. 00:50:18.129 [2024-07-23 09:03:30.510589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.510673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.510957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.511038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.511349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.511396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.511619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.511702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.512048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.512132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.512447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.512492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 
00:50:18.130 [2024-07-23 09:03:30.512752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.512834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.513159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.513251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.513551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.513597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.513915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.513997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.514330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.514415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.514658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.514725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.515048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.515133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.515438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.515485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.515695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.515740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.516059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.516151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 
00:50:18.130 [2024-07-23 09:03:30.516429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.516474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.516647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.516692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.516913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.516993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.517342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.517423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.517671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.517742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.518080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.518161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.518437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.518482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.518694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.518740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.518995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.519099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.519419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.519478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 
00:50:18.130 [2024-07-23 09:03:30.519670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.519714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.520010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.520093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.520426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.520472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.520724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.520799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.521115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.521196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.521508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.521555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.521854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.521900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.522239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.522337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.522594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.522671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 00:50:18.130 [2024-07-23 09:03:30.522990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.523036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.130 qpair failed and we were unable to recover it. 
00:50:18.130 [2024-07-23 09:03:30.523418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.130 [2024-07-23 09:03:30.523464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.523721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.523803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.524113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.524159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.524508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.524555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.524821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.524903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.525236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.525334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.525581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.525664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.525999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.526081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.526400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.526446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.526708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.526790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 
00:50:18.131 [2024-07-23 09:03:30.527106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.527197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.527491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.527543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.527827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.527909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.528198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.528279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.528548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.528593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.528912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.528995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.529343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.529426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.529623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.529669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.529907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.529988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.530341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.530418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 
00:50:18.131 [2024-07-23 09:03:30.530688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.530778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.531070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.531151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.531442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.531489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.531751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.531842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.532169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.532251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.532568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.532639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.532915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.532961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.533236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.533334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.533643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.533726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.534003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.534048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 
00:50:18.131 [2024-07-23 09:03:30.534403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.534450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.534677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.534761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.535087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.535172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.535538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.535585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.535802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.535884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.536228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.536328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.131 qpair failed and we were unable to recover it. 00:50:18.131 [2024-07-23 09:03:30.536557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.131 [2024-07-23 09:03:30.536629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.536958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.537051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.537379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.537425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.537652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.537699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 
00:50:18.132 [2024-07-23 09:03:30.537860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.537906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.538101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.538148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.538369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.538416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.538599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.538646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.538867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.538913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.539164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.539208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.539372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.539416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.539605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.539648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.539856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.539900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.540122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.540166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 
00:50:18.132 [2024-07-23 09:03:30.540384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.540429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.540641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.540693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.540911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.540969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.541187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.541234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.541430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.541477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.541697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.541743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.542001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.542048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.542221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.542265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.542479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.542526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.542743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.542789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 
00:50:18.132 [2024-07-23 09:03:30.543032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.543079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.543269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.543335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.543599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.543644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.543883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.543929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.544187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.544232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.544422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.544468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.544714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.544760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.545011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.545057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.545287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.545345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.545601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.545646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 
00:50:18.132 [2024-07-23 09:03:30.545908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.545953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.546165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.546237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.132 [2024-07-23 09:03:30.546494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.132 [2024-07-23 09:03:30.546545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.132 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.546847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.546906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.547192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.547260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.547512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.547562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.547848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.547902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.548175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.548227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.548512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.548560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.548766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.548841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 
00:50:18.133 [2024-07-23 09:03:30.549030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.549077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.549332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.549387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.549643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.549690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.549966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.550012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.550275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.550331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.550521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.550566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.550773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.550850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.551098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.551175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.551417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.551464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.551751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.551820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 
00:50:18.133 [2024-07-23 09:03:30.552141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.552212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.552463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.552516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.552777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.552844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.553096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.553142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.553449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.553514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.553719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.553782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.554007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.554071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.554323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.554369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.554583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.554629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.554873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.554937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 
00:50:18.133 [2024-07-23 09:03:30.555163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.555209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.555462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.555509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.555812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.555887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.133 qpair failed and we were unable to recover it. 00:50:18.133 [2024-07-23 09:03:30.556114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.133 [2024-07-23 09:03:30.556179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.556445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.556511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.556777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.556841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.557125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.557192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.557450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.557515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.557739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.557804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.558075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.558140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 
00:50:18.134 [2024-07-23 09:03:30.558386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.558463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.558730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.558794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.559064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.559126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.559392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.559459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.559690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.559753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.559984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.560049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.560255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.560300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.560589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.560654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.560942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.561019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.561186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.561231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 
00:50:18.134 [2024-07-23 09:03:30.561516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.561587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.561862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.561923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.562106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.562152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.562403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.562473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.562747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.562818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.563093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.563162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.563460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.563509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.563748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.563814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.564094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.564157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.564409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.564475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 
00:50:18.134 [2024-07-23 09:03:30.564730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.564792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.564978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.565047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.565326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.565373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.565617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.565682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.565933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.565995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.566271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.566326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.566615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.566693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.566997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.567064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.567299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.567356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 00:50:18.134 [2024-07-23 09:03:30.567637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.134 [2024-07-23 09:03:30.567683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.134 qpair failed and we were unable to recover it. 
00:50:18.135 [2024-07-23 09:03:30.567960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.568023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.568289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.568356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.568633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.568678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.568974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.569021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.569285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.569343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.569622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.569668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.569918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.569980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.570260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.570334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.570610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.570655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.570894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.570958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 
00:50:18.135 [2024-07-23 09:03:30.571259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.571339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.571567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.571626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.571922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.571991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.572221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.572266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.572491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.572537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.572814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.572884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.573194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.573240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.573589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.573638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.573898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.573960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.574229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.574275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 
00:50:18.135 [2024-07-23 09:03:30.574554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.574600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.574795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.574856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.575108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.575170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.575450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.575515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.575752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.575814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.576062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.576125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.576418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.576484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.576764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.576835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.577120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.577189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.577468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.577532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 
00:50:18.135 [2024-07-23 09:03:30.577823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.577890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.578159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.578211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.578507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.578581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.578823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.578885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.579138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.579200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.579448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.135 [2024-07-23 09:03:30.579495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.135 qpair failed and we were unable to recover it. 00:50:18.135 [2024-07-23 09:03:30.579700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.579766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.580043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.580107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.580398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.580445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.580684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.580746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 
00:50:18.136 [2024-07-23 09:03:30.581028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.581089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.581363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.581410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.581699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.581767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.582005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.582070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.582342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.582389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.582647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.582712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.582940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.583003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.583231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.583276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.583516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.583561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.583853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.583919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 
00:50:18.136 [2024-07-23 09:03:30.584213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.584284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.584569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.584615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.584892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.584954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.585225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.585270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.585517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.585564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.585817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.585863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.586146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.586216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.586449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.586497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.586807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.586875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.587149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.587212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 
00:50:18.136 [2024-07-23 09:03:30.587465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.587518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.587803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.587872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.588131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.588193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.588481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.588545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.588802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.588848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.589123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.589188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.589483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.589546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.589803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.589865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.590102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.590166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.590420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.590486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 
00:50:18.136 [2024-07-23 09:03:30.590782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.590856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.591148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.136 [2024-07-23 09:03:30.591211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.136 qpair failed and we were unable to recover it. 00:50:18.136 [2024-07-23 09:03:30.591467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.591530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.591775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.591837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.592154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.592201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.592448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.592526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.592824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.592897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.593202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.593249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.593498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.593561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.593869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.593916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 
00:50:18.137 [2024-07-23 09:03:30.594147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.594193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.594491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.594539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.594842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.594913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.595188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.595233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.595488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.595553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.595846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.595915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.596206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.596271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.596524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.596588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.596881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.596944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.597175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.597221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 
00:50:18.137 [2024-07-23 09:03:30.597467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.597530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.597776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.597841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.598051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.598115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.598351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.598399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.598654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.598719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.599006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.599070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.599297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.599358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.599628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.599674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.599971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.600049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.600274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.600329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 
00:50:18.137 [2024-07-23 09:03:30.600561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.600607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.600894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.600960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.601254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.601338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.601576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.601622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.601857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.601923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.602160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.602223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.602480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.602527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.602734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.137 [2024-07-23 09:03:30.602798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.137 qpair failed and we were unable to recover it. 00:50:18.137 [2024-07-23 09:03:30.603076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.138 [2024-07-23 09:03:30.603147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.138 qpair failed and we were unable to recover it. 00:50:18.138 [2024-07-23 09:03:30.603329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.138 [2024-07-23 09:03:30.603376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.138 qpair failed and we were unable to recover it. 
00:50:18.138 [2024-07-23 09:03:30.603690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.138 [2024-07-23 09:03:30.603737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:18.138 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 09:03:30.604 through 09:03:30.666 ...]
00:50:18.420 [2024-07-23 09:03:30.666354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.420 [2024-07-23 09:03:30.666402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:18.420 qpair failed and we were unable to recover it.
00:50:18.420 [2024-07-23 09:03:30.666660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.420 [2024-07-23 09:03:30.666725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420
00:50:18.420 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 from 09:03:30.666 through 09:03:30.675 ...]
00:50:18.421 [2024-07-23 09:03:30.675480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.421 [2024-07-23 09:03:30.675528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420
00:50:18.421 qpair failed and we were unable to recover it.
00:50:18.421 [2024-07-23 09:03:30.675817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.675900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.676222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.676304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.676592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.676639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.676866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.676911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.677207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.677290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.677440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:50:18.421 [2024-07-23 09:03:30.677799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.677864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.678151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.678204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.678487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.678535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.678806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.678853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 
00:50:18.421 [2024-07-23 09:03:30.679158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.679226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.679456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.679504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.679800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.679848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.680130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.680193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.680394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.680442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.680729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.680849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.681160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.681249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.681490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.681538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.681773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.681855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.682206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.682289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 
00:50:18.421 [2024-07-23 09:03:30.682550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.682643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.682969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.683054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.683367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.683413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.683606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.683689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.684000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.684085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.684390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.684435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.684661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.684707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.684944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.685020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.685343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.685411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.685626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.685740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 
00:50:18.421 [2024-07-23 09:03:30.686019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.686102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.686402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.686448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.686648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.686742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.687069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.687153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.687421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.687470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.687751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.687797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.688011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.688093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.688363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.688410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.688604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.688650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.688854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.688936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 
00:50:18.421 [2024-07-23 09:03:30.689175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.689258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.689504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.689552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.689772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.689854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.690102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.690184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.690415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.690461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.690632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.690714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.690983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.691065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.691346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.691392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.691570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.691615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.691802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.691884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 
00:50:18.421 [2024-07-23 09:03:30.692195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.421 [2024-07-23 09:03:30.692279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.421 qpair failed and we were unable to recover it. 00:50:18.421 [2024-07-23 09:03:30.692476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.692521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.692847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.692933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.693187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.693277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.693478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.693523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.693688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.693733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.693953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.693998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.694235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.694280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.694444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.694489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.694755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.694853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 
00:50:18.422 [2024-07-23 09:03:30.695138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.695189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.695363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.695410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.695583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.695629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.695812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.695858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.696090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.696135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.696351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.696398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.696645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.696700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.696889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.696939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.697099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.697143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.697370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.697416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 
00:50:18.422 [2024-07-23 09:03:30.697580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.697624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.697846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.697890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.698100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.698151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.698354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.698398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.698617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.698684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.698936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.698991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.699216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.699299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.699504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.699551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.699796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.699842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.700016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.700062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 
00:50:18.422 [2024-07-23 09:03:30.700326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.700407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.700593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.700645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.700851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.700896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.701113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.701158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.701370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.701417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.701647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.701693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.701912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.701994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.702276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.702392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.702563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.702645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.702950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.703032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 
00:50:18.422 [2024-07-23 09:03:30.703281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.703346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.703497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.703542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.703737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.703819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.704142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.704225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.704449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.704494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.704711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.704757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.704980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.705025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.705220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.705266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.706541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.706602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.706809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.706855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 
00:50:18.422 [2024-07-23 09:03:30.707031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.707084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.707304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.707358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.707523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.707569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.707812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.707857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.708071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.708126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.708366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.708425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.708617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.708700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.709053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.709136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.709394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.709440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.709649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.709732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 
00:50:18.422 [2024-07-23 09:03:30.710043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.710125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.710423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.710469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.710754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.710850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.711149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.711247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.711455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.711501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.711727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.711772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.712050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.712132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.712399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.712444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.712621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.422 [2024-07-23 09:03:30.712675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.422 qpair failed and we were unable to recover it. 00:50:18.422 [2024-07-23 09:03:30.712942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.713041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 
00:50:18.423 [2024-07-23 09:03:30.713326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.713372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7a00 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.713535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.713599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.713797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.713888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.714167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.714251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.714513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.714561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.714827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.714911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.715247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.715377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.717304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.717409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.717670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.717717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.717978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.718063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 
00:50:18.423 [2024-07-23 09:03:30.718356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.718403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.718584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.718640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.718868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.718914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.719187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.719269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.719480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.719527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.719784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.719866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.720170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.720253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.720565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.720611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.720914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.720996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.721397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.721452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 
00:50:18.423 [2024-07-23 09:03:30.721743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.721825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.722178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.722259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.722542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.722588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.722800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.722903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.723229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.723342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.723523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.723570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.723745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.723791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.723994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.724077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.724379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.724425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.724585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.724640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 
00:50:18.423 [2024-07-23 09:03:30.724879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.724962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.725286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.725395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.725610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.725661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.725876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.725958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.726240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.726341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.726595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.726640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.726893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.726999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.727206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.727251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.727446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.727492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.727712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.727795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 
00:50:18.423 [2024-07-23 09:03:30.728109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.728192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.728428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.728474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.728694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.728776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.729107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.729189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.729432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.729478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.729689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.729771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.730103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.730186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.730431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.730478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.730762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.730844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.731053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.731136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 
00:50:18.423 [2024-07-23 09:03:30.731410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.731455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.731715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.731797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.732083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.732166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.732421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.732466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.732733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.732816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.733017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.733099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.733372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.733419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.733604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.733686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.733948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.734031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.734341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.734387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 
00:50:18.423 [2024-07-23 09:03:30.735961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.736055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.736347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.736395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.736575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.736621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.736806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.423 [2024-07-23 09:03:30.736888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.423 qpair failed and we were unable to recover it. 00:50:18.423 [2024-07-23 09:03:30.737125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.737207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.737441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.737487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.737706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.737751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.737968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.738014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.738238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.738283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.738456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.738501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 
00:50:18.424 [2024-07-23 09:03:30.738773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.738844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.739126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.739170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.739372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.739425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.739606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.739651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.739857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.739902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.740091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.740136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.740321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.740366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.740554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.740600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.740845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.740890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.741137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.741181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 
00:50:18.424 [2024-07-23 09:03:30.741366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.741412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.741573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.741618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.741814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.741858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.742093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.742138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.742386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.742431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.742645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.742689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.742970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.743015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.743273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.743328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.743518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.743572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.743880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.743962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 
00:50:18.424 [2024-07-23 09:03:30.744225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.744305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.744539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.744584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.744820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.744865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.745079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.745123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.745396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.745442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.745671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.745716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.745887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.745932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.746072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.746117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.746289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.746343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.746522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.746567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 
00:50:18.424 [2024-07-23 09:03:30.746735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.746793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.746996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.747041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.747212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.747256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.747413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.747459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.747657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.747703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.747873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.747917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.748091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.748137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.748339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.748386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.748560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.748606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.748770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.748814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 
00:50:18.424 [2024-07-23 09:03:30.748959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.749014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.749296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.749374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.749553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.749616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.749887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.749932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.750176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.750221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.750478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.750524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.750704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.750749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.750917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.750962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.751134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.751197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.751414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.751465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 
00:50:18.424 [2024-07-23 09:03:30.751763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.751829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.752049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.752116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.752349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.752425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.752623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.752695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.752985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.753048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.753261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.753306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.753530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.753575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.753774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.753836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.754098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.754143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 00:50:18.424 [2024-07-23 09:03:30.754327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.424 [2024-07-23 09:03:30.754373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.424 qpair failed and we were unable to recover it. 
00:50:18.424 [2024-07-23 09:03:30.754577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.754622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.754812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.754857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.755078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.755122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.755403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.755449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.755724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.755788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.756078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.756153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.756434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.756504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.756798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.756861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.757139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.757211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.757474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.757536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 
00:50:18.425 [2024-07-23 09:03:30.757771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.757835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.758097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.758162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.758324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.758372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.758604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.758650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.758899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.758960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.759252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.759296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.759507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.759590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.759842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.759906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.760137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.760182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.760381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.760454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 
00:50:18.425 [2024-07-23 09:03:30.760664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.760728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.760927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.760990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.761264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.761322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.761508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.761576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.761867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.761935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.762148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.762194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.762440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.762503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.762778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.762823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.763117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.763180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.763415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.763479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 
00:50:18.425 [2024-07-23 09:03:30.763713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.763774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.764066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.764127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.764416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.764479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.764638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.764712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.765020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.765065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.765241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.765286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.765497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.765562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.765853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.765899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.766087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.766131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.766409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.766455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 
00:50:18.425 [2024-07-23 09:03:30.766698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.766742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.766998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.767060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.767297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.767357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.767651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.767715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.767973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.768036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.768325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.768376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.768600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.768645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.768914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.768974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.769207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.769251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.769445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.769491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 
00:50:18.425 [2024-07-23 09:03:30.769713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.769776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.770044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.770106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.770414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.770478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.770744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.770811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.771077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.771146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.771393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.771464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.771700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.771764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.772039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.772109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.772410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.772472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.772738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.772810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 
00:50:18.425 [2024-07-23 09:03:30.773107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.773169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.773432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.773495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.773677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.773749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.774035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.774110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.774395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.774458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.774693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.774756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.774946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.425 [2024-07-23 09:03:30.775007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.425 qpair failed and we were unable to recover it. 00:50:18.425 [2024-07-23 09:03:30.775281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.775333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.775544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.775611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.775812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.775876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 
00:50:18.426 [2024-07-23 09:03:30.776079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.776124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.776281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.776334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.776490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.776534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.776743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.776787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.776927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.776970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.777116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.777160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.777338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.777383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.777546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.777597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.777829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.777874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.778110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.778154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 
00:50:18.426 [2024-07-23 09:03:30.778391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.778475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.778680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.778724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.779004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.779049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.779333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.779387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.779609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.779653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.779890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.779950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.780192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.780236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.780466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.780528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.780778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.780843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.781127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.781198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 
00:50:18.426 [2024-07-23 09:03:30.781457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.781522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.781859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.781927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.782159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.782204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.782463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.782526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.782821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.782884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.783140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.783201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.783440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.783501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.783810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.783878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.784185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.784230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.784455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.784518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 
00:50:18.426 [2024-07-23 09:03:30.784762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.784825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.785035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.785097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.785369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.785419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.785658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.785722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.786017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.786088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.786328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.786384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.786589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.786660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.786956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.787027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.787296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.787353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.787518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.787566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 
00:50:18.426 [2024-07-23 09:03:30.787804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.787867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.788189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.788233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.788467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.788512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.788765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.788826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.789104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.789165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.789398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.789443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.789613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.789678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.789975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.790034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.790279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.790333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.790509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.790573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 
00:50:18.426 [2024-07-23 09:03:30.790816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.790881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.791061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.791125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.791404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.791475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.791680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.791731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.791919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.791982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.792164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.792210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.792419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.792483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.792772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.792848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.793098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.793143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 00:50:18.426 [2024-07-23 09:03:30.793412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.426 [2024-07-23 09:03:30.793477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.426 qpair failed and we were unable to recover it. 
00:50:18.426 [2024-07-23 09:03:30.793771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.793845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.794087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.794136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.794392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.794463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.794694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.794757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.795007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.795070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.795384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.795454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.795702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.795749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.796044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.796109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.796412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.796480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.796799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.796907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 
00:50:18.427 [2024-07-23 09:03:30.797208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.797295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.797534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.797582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.797825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.797923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.798211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.798297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.798535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.798592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.798810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.798893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.799198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.799283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.799567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.799615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.799897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.800007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.800320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.800366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 
00:50:18.427 [2024-07-23 09:03:30.800553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.800603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.800846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.800931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.801252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.801379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.801582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.801628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.801883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.801967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.802172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.802219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.802442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.802490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.802790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.802875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.803146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.803230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.803429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.803476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 
00:50:18.427 [2024-07-23 09:03:30.803779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.803864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.805817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.805913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.806194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.806281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.806482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.806529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.808510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.808564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.808877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.808965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.810582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.810639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.810961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.811063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.811323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.811372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.811633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.811680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 
00:50:18.427 [2024-07-23 09:03:30.811970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.812069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.812331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.812378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.812614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.812661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.812946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.813031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.813291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.813348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.813522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.813569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.813820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.813920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.814234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.814353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.814600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.814646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.814908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.815006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 
00:50:18.427 [2024-07-23 09:03:30.815304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.815401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.815601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.815647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.815871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.815967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.816229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.816331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.816601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.816647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.816873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.816956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.817227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.817329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.817580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.817627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.817876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.817960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.818249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.818354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 
00:50:18.427 [2024-07-23 09:03:30.818580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.818626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.818870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.818916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.819193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.819238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.819449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.819496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.819742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.819788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.820047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.820111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.820400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.820447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.820652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.820698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.427 [2024-07-23 09:03:30.820897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.427 [2024-07-23 09:03:30.820981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.427 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.821278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.821385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 
00:50:18.428 [2024-07-23 09:03:30.821645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.821691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.821946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.821991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.822215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.822262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.822489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.822535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.822743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.822789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.823014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.823060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.823305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.823361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.823586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.823632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.823827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.823874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.824080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.824127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 
00:50:18.428 [2024-07-23 09:03:30.824367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.824415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.824657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.824703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.824953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.825013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.825292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.825350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.825559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.825614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.825844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.825928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.826228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.826331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.826632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.826716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.827037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.827083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 00:50:18.428 [2024-07-23 09:03:30.827395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.827442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it. 
00:50:18.428 [2024-07-23 09:03:30.827730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.428 [2024-07-23 09:03:30.827777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.428 qpair failed and we were unable to recover it.
00:50:18.428 [... the same three-part error group (posix_sock_create connect() failed errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 09:03:30.827730 through 09:03:30.910392; duplicate entries collapsed ...]
00:50:18.431 [2024-07-23 09:03:30.910392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.910439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it.
00:50:18.431 [2024-07-23 09:03:30.910785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.910868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.911215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.911298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.911645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.911691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.912065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.912149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.912488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.912573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.912888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.912934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.913259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.913356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.913676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.913760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.914111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.914204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.914587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.914679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 
00:50:18.431 [2024-07-23 09:03:30.914986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.915070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.915408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.915483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.915845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.915928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.916274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.916373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.916722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.916796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.917109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.917193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.917528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.917611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.917967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.918036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.918368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.918454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.918818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.918912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 
00:50:18.431 [2024-07-23 09:03:30.919246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.919333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.919711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.919793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.920097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.920197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.920481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.920528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.920840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.920901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.921221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.921304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.921632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.921678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.922024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.922107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.922461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.922528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.431 qpair failed and we were unable to recover it. 00:50:18.431 [2024-07-23 09:03:30.922886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.431 [2024-07-23 09:03:30.922947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 
00:50:18.432 [2024-07-23 09:03:30.923270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.923377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.923704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.923787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.924123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.924194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.924579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.924654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.925002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.925086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.925421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.925484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.925666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.925729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.926034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.926119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.926471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.926559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.926838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.926901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 
00:50:18.432 [2024-07-23 09:03:30.927239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.927351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.432 [2024-07-23 09:03:30.927645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.432 [2024-07-23 09:03:30.927690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.432 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.927975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.928058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.928408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.928472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.928793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.928872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.929188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.929275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.929650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.929734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.930028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.930074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.930343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.930389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.930571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.930616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 
00:50:18.701 [2024-07-23 09:03:30.930854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.930900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.931147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.931194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.931368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.931423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.931616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.931662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.931889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.931973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.932278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.932390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.932726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.932815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.933174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.933256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.933560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.933635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.933953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.934038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 
00:50:18.701 [2024-07-23 09:03:30.934398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.934463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.934737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.934800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.935110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.935182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.935534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.935629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.935930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.936014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.936358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.936424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.936659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.936744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.937035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.701 [2024-07-23 09:03:30.937118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.701 qpair failed and we were unable to recover it. 00:50:18.701 [2024-07-23 09:03:30.937382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.937428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.937687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.937770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 
00:50:18.702 [2024-07-23 09:03:30.938078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.938160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.938475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.938522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.938863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.938946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.939276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.939393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.939642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.939687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.939942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.940025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.940366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.940429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.940751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.940823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.941167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.941250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.941505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.941567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 
00:50:18.702 [2024-07-23 09:03:30.941900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.941946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.942226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.942324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.942602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.942685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.942956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.943002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.943370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.943417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.943610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.943693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.944005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.944051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.944406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.944469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.944818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.944901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.945240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.945304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 
00:50:18.702 [2024-07-23 09:03:30.945613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.945706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.946020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.946103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.946404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.946450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.946800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.946883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.947193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.947276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.947554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.947599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.947894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.947976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.948373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.948437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.948739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.948785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.949120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.949213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 
00:50:18.702 [2024-07-23 09:03:30.949511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.949592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.950009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.950094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.950423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.950505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.950828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.702 [2024-07-23 09:03:30.950911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.702 qpair failed and we were unable to recover it. 00:50:18.702 [2024-07-23 09:03:30.951215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.951260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.951555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.951652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.951953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.952035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.952343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.952389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.952735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.952820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.953154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.953239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 
00:50:18.703 [2024-07-23 09:03:30.953470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.953516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.953778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.953861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.954177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.954261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.954556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.954603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.954895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.954978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.955279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.955389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.955705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.955750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.955959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.956042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.956399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.956462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.956736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.956782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 
00:50:18.703 [2024-07-23 09:03:30.957119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.957203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.957528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.957613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.957957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.958038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.958328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.958411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.958684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.958767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.959077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.959122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.959506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.959570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.959906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.959990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.960346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.960424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.960674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.960757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 
00:50:18.703 [2024-07-23 09:03:30.961118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.961201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.961555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.961632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.961955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.962038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.962377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.962425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.962733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.962817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.963135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.963218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.963623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.963706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.964020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.964065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.964385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.964449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 00:50:18.703 [2024-07-23 09:03:30.964769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.703 [2024-07-23 09:03:30.964862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.703 qpair failed and we were unable to recover it. 
00:50:18.704 [2024-07-23 09:03:30.965173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.965218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.965568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.965636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.965955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.966038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.966386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.966454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.966745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.966828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.967128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.967213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.967518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.967563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.967868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.967952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.968267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.968381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.968676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.968722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 
00:50:18.704 [2024-07-23 09:03:30.969071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.969154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.969530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.969618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.969964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.970041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.970392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.970456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.970734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.970818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.971153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.971222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.971615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.971714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.972012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.972095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.972376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.972423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.972716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.972798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 
00:50:18.704 [2024-07-23 09:03:30.973149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.973232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.973588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.973677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.973991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.974073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.974414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.974477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.974743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.974788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.975142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.975226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.975616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.975723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.976064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.976131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.976486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.976550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.976919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.977003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 
00:50:18.704 [2024-07-23 09:03:30.977342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.977413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.977676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.977759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.978107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.978190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.978570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.978654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.979014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.704 [2024-07-23 09:03:30.979098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.704 qpair failed and we were unable to recover it. 00:50:18.704 [2024-07-23 09:03:30.979442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.979506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.979841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.979887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.980182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.980264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.980644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.980727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.981024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.981075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 
00:50:18.705 [2024-07-23 09:03:30.981389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.981451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.981668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.981751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.982100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.982190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.982567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.982648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.982978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.983060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.983399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.983466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.983836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.983920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.984219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.984303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.984672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.984741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.985064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.985146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 
00:50:18.705 [2024-07-23 09:03:30.985468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.985532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.985849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.985921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.986278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.986398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.986688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.986772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.987089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.987134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.987453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.987537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.987885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.987969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.988286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.988339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.988707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.988790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.989108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.989190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 
00:50:18.705 [2024-07-23 09:03:30.989600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.989684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.990031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.990114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.990469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.990553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.990891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.990957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.991272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.991371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.991691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.991774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.992115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.992179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.992512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.992597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.705 [2024-07-23 09:03:30.992947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.705 [2024-07-23 09:03:30.993029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.705 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.993396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.993488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 
00:50:18.706 [2024-07-23 09:03:30.993846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.993928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.994248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.994365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.994725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.994815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.995150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.995232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.995609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.995692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.996043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.996127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.996459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.996543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.996897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.996981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.997294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.997347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.997708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.997801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 
00:50:18.706 [2024-07-23 09:03:30.998150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.998233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.998573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.998639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.999006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.999089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.999438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.999522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:30.999833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:30.999878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.000190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.000273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.000614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.000698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.001048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.001129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.001446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.001531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.001865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.001950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 
00:50:18.706 [2024-07-23 09:03:31.002262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.002328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.002703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.002787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.003114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.003196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.003567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.003640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.003952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.004035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.004397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.004482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.004795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.004840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.706 [2024-07-23 09:03:31.005210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.706 [2024-07-23 09:03:31.005293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.706 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.005677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.005761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.006100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.006172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 
00:50:18.707 [2024-07-23 09:03:31.006550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.006636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.006987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.007070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.007383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.007429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.007759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.007842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.008159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.008240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.008596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.008679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.009044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.009128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.009441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.009525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.009837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.009883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.010235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.010331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 
00:50:18.707 [2024-07-23 09:03:31.010663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.010746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.011066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.011111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.011476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.011560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.011933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.012016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.012358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.012438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.012766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.012849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.013192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.013277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.013635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.013713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.014037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.014119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.014478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.014573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 
00:50:18.707 [2024-07-23 09:03:31.014873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.014918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.015224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.015306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.015683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.015766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.016102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.016176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.016542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.016627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.016982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.017065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.017374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.017420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.017728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.017811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.018159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.018242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.018599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.018692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 
00:50:18.707 [2024-07-23 09:03:31.019039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.019122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.019445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.019531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.019839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.019884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.707 [2024-07-23 09:03:31.020256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.707 [2024-07-23 09:03:31.020357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.707 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.020703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.020788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.021125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.021188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.021495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.021542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.021789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.021872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.022205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.022269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.022664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.022749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 
00:50:18.708 [2024-07-23 09:03:31.023094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.023178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.023532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.023613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.023942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.024025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.024371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.024456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.024795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.024860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.025215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.025298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.025677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.025761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.026097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.026143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.026511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.026596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.026914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.026998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 
00:50:18.708 [2024-07-23 09:03:31.027340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.027424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.027734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.027817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.028167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.028251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.028628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.028721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.029074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.029179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.029550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.029635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.029972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.030047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.030399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.030484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.030783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.030866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.031207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.031296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 
00:50:18.708 [2024-07-23 09:03:31.031668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.031751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.032094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.032177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.032477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.032523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.032829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.032911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.033227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.033326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.033643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.033688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.034004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.034087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.034391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.034476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.034824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.034894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 00:50:18.708 [2024-07-23 09:03:31.035251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.708 [2024-07-23 09:03:31.035350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.708 qpair failed and we were unable to recover it. 
00:50:18.708 [2024-07-23 09:03:31.035663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.035747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.036050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.036095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.036446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.036531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.036850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.036933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.037280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.037374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.037726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.037809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.038095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.038177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.038525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.038602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.038958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.039040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.039356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.039442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 
00:50:18.709 [2024-07-23 09:03:31.039787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.039868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.040205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.040288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.040637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.040719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.041055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.041124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.041477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.041563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.041904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.041987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.042330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.042419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.042774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.042858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.043215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.043297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.043656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.043702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 
00:50:18.709 [2024-07-23 09:03:31.044026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.044109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.044434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.044518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.044844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.044889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.045220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.045303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.045605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.045689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.046050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.046139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.046452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.046537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.046897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.046980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.047322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.047401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.047752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.047835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 
00:50:18.709 [2024-07-23 09:03:31.048145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.048229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.048606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.048652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.048955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.049037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.049390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.709 [2024-07-23 09:03:31.049474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.709 qpair failed and we were unable to recover it. 00:50:18.709 [2024-07-23 09:03:31.049821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.049896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.050207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.050289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.050620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.050703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.051042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.051108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.051461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.051546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.051894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.051977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 
00:50:18.710 [2024-07-23 09:03:31.052327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.052406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.052746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.052808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.053157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.053243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.053604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.053678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.054046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.054128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.054481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.054565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.054866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.054912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.055252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.055352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.055719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.055826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.056169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.056247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 
00:50:18.710 [2024-07-23 09:03:31.056625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.056708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.057055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.057137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.057474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.057550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.057876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.057959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.058306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.058401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.058698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.058743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.059058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.059150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.059471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.059556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.059863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.059908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.060259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.060358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 
00:50:18.710 [2024-07-23 09:03:31.060716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.060799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.061142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.061206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.061560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.061635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.061993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.062076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.062324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.062370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.062599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.062682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.063041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.063124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.063427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.063474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.063841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.063924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.710 qpair failed and we were unable to recover it. 00:50:18.710 [2024-07-23 09:03:31.064268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.710 [2024-07-23 09:03:31.064368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 
00:50:18.711 [2024-07-23 09:03:31.064720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.064788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.065134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.065216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.065547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.065631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.065976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.066055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.066365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.066449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.066754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.066836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.067140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.067185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.067518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.067565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.067815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.067898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.068154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.068198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 
00:50:18.711 [2024-07-23 09:03:31.068511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.068596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.068961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.069043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.069387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.069464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.069786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.069869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.070187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.070268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.070625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.070703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.071060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.071143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.071500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.071585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.071936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.072011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.072377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.072463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 
00:50:18.711 [2024-07-23 09:03:31.072766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.072849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.073140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.073185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.073484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.073568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.073881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.073964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.074302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.074386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.074711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.074793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.075146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.075238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.075606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.075694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.076048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.076131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 00:50:18.711 [2024-07-23 09:03:31.076479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.711 [2024-07-23 09:03:31.076565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.711 qpair failed and we were unable to recover it. 
00:50:18.711 [2024-07-23 09:03:31.076922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.077007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.077327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.077411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.077749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.077832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.078170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.078231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.078594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.078679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.078999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.079082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.079386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.079432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.079786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.079869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.080220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.080302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.080663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.080733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 
00:50:18.712 [2024-07-23 09:03:31.081061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.081145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.081574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.081660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.082012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.082113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.082485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.082571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.082888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.082971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.083338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.083436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.083784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.083868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.084213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.084295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.084619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.084665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.085010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.085094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 
00:50:18.712 [2024-07-23 09:03:31.085414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.085497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.085835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.085881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.086204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.086285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.086674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.086758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.087049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.087093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.087406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.087491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.087835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.087919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.088276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.088383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.088734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.088816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.089161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.089243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 
00:50:18.712 [2024-07-23 09:03:31.089594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.089678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.090028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.090110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.090467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.090551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.090894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.712 [2024-07-23 09:03:31.090967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.712 qpair failed and we were unable to recover it. 00:50:18.712 [2024-07-23 09:03:31.091360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.091446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.091750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.091833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.092127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.092178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.092496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.092580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.092880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.092963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.093319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.093423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 
00:50:18.713 [2024-07-23 09:03:31.093746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.093829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.094170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.094254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.094576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.094622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.094963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.095046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.095373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.095457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.095815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.095891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.096207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.096290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.096652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.096734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.097076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.097155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.097512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.097597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 
00:50:18.713 [2024-07-23 09:03:31.097954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.098036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.098374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.098447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.098795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.098878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.099237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.099361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.099727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.099815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.100136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.100219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.100598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.100682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.100975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.101020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.101367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.101465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.101829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.101912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 
00:50:18.713 [2024-07-23 09:03:31.102228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.102272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.102624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.102707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.103059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.103142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.103492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.103569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.103917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.103999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.104325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.104409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.104742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.104787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.105107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.105190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.105527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.105610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 00:50:18.713 [2024-07-23 09:03:31.105880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.713 [2024-07-23 09:03:31.105926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.713 qpair failed and we were unable to recover it. 
00:50:18.713 [2024-07-23 09:03:31.106243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.714 [2024-07-23 09:03:31.106348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.714 qpair failed and we were unable to recover it. 00:50:18.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2553719 Killed "${NVMF_APP[@]}" "$@" 00:50:18.714 [2024-07-23 09:03:31.106639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.714 [2024-07-23 09:03:31.106722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.714 qpair failed and we were unable to recover it. 00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:50:18.714 [2024-07-23 09:03:31.107063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.714 [2024-07-23 09:03:31.107136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.714 qpair failed and we were unable to recover it. 00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:50:18.714 [2024-07-23 09:03:31.107435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:50:18.714 [2024-07-23 09:03:31.107521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.714 qpair failed and we were unable to recover it. 00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:50:18.714 [2024-07-23 09:03:31.107870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.714 [2024-07-23 09:03:31.107954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.714 qpair failed and we were unable to recover it. 00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:18.714 [2024-07-23 09:03:31.108319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.714 [2024-07-23 09:03:31.108415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.714 qpair failed and we were unable to recover it. 00:50:18.714 [2024-07-23 09:03:31.108777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.714 [2024-07-23 09:03:31.108884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.714 qpair failed and we were unable to recover it. 
00:50:18.714 [2024-07-23 09:03:31.109197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.109280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.109550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.109596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.109824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.109906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.110204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.110288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.110647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.110720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.111045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.111128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.111475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.111560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.111904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.111972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.112297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.112394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.112725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.112809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.113112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.113158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.113440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.113524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.113837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.113919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.114249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.114324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.114680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.114763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.115079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2554404
00:50:18.714 [2024-07-23 09:03:31.115162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:50:18.714 [2024-07-23 09:03:31.115491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.115539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2554404
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 [2024-07-23 09:03:31.115887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.115970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2554404 ']'
00:50:18.714 [2024-07-23 09:03:31.116334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.116419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:50:18.714 [2024-07-23 09:03:31.116738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:50:18.714 [2024-07-23 09:03:31.116785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:50:18.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:50:18.714 [2024-07-23 09:03:31.117099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:50:18.714 [2024-07-23 09:03:31.117184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.714 09:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:50:18.714 [2024-07-23 09:03:31.117477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.714 [2024-07-23 09:03:31.117562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.714 qpair failed and we were unable to recover it.
00:50:18.715 [2024-07-23 09:03:31.117901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.715 [2024-07-23 09:03:31.117963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.715 qpair failed and we were unable to recover it.
00:50:18.715 [2024-07-23 09:03:31.118286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.715 [2024-07-23 09:03:31.118340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.715 qpair failed and we were unable to recover it.
00:50:18.715 [2024-07-23 09:03:31.118571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.715 [2024-07-23 09:03:31.118616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.715 qpair failed and we were unable to recover it.
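Interleaved with the connection errors, the harness trace above shows what nvmf_target_disconnect_tc2 is doing at this point: target_disconnect.sh line 36 has just killed the previous target process (pid 2553719), which is consistent with the initiator's connect() calls being refused, and disconnect_init 10.0.0.2 then runs nvmfappstart -m 0xF0, so nvmf/common.sh relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 2554404 and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A rough sketch of that restart sequence, reusing the command line from the trace (-m 0xF0 is the app's CPU core mask; the polling loop below is only an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation):

  # Sketch only -- command and flags copied from the trace above.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Wait until the new target is up and listening on its RPC UNIX socket
  # ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...").
  while [ ! -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
      sleep 0.5
  done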
00:50:18.715 [2024-07-23 09:03:31.118808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.118855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.119112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.119157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.119363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.119410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.119694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.119739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.119929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.119974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.120237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.120285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.120577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.120623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.120912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.120958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.121187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.121233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.121410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.121457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 
00:50:18.715 [2024-07-23 09:03:31.121649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.121732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.122015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.122098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.122350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.122397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.122586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.122668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.122948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.123032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.123292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.123397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.123588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.123672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.123928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.124010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.124270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.124381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.124545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.124643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 
00:50:18.715 [2024-07-23 09:03:31.124896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.124989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.125267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.125383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.125594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.125678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.125959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.126043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.126349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.126419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.126642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.126726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.127019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.127100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.127374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.127420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.127609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.715 [2024-07-23 09:03:31.127692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.715 qpair failed and we were unable to recover it. 00:50:18.715 [2024-07-23 09:03:31.127952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.128034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 
00:50:18.716 [2024-07-23 09:03:31.128343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.128415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.128621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.128704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.128964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.129048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.129376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.129423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.129658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.129740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.129969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.130052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.130345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.130413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.130647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.130728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.131014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.131119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.131382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.131429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 
00:50:18.716 [2024-07-23 09:03:31.131626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.131708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.131968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.132050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.132286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.132387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.132545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.132639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.132941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.133023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.133302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.133396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.133599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.133682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.133973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.134057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.134322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.134398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.134559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.134605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 
00:50:18.716 [2024-07-23 09:03:31.134810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.134892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.135167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.135249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.135525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.135572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.135751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.135833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.136111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.136156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.136350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.136435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.136667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.136751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.137005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.137050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.137237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.137333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.137592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.137676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 
00:50:18.716 [2024-07-23 09:03:31.137954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.138004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.716 qpair failed and we were unable to recover it. 00:50:18.716 [2024-07-23 09:03:31.138259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.716 [2024-07-23 09:03:31.138357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.138638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.138721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.138964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.139009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.139172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.139255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.139520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.139605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.139854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.139900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.140103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.140185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.140431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.140516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.140774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.140820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 
00:50:18.717 [2024-07-23 09:03:31.140977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.141022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.141170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.141262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.141556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.141640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.141929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.142012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.142303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.142400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.142611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.142656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.142914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.142997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.143257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.143374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.143623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.143669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.143914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.143996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 
00:50:18.717 [2024-07-23 09:03:31.144271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.144374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.144672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.144717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.144987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.145071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.145299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.145398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.145632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.145685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.145907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.145989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.146254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.146354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.146618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.146663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.146897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.146980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.147243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.147344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 
00:50:18.717 [2024-07-23 09:03:31.147730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.147813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.148091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.148174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.148507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.148592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.148937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.149013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.149360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.149445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.149754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.149836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.150130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.150175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.717 qpair failed and we were unable to recover it. 00:50:18.717 [2024-07-23 09:03:31.150412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.717 [2024-07-23 09:03:31.150457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.150689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.150771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.151095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.151182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 
00:50:18.718 [2024-07-23 09:03:31.151498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.151599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.151946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.152029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.152324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.152425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.152750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.152813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.153033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.153137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.153486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.153586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.153939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.154021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.154348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.154433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.154735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.154779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.155058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.155142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 
00:50:18.718 [2024-07-23 09:03:31.155446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.155531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.155875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.155961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.156288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.156397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.156690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.156771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.157123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.157169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.157462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.157546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.157868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.157951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.158264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.158318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.158646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.158728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.159036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.159120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 
00:50:18.718 [2024-07-23 09:03:31.159422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.159501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.159836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.159918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.160254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.160352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.160664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.160710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.161054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.161137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.161463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.161547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.161855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.161901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.162208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.162290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.162651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.162733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.162993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.163038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 
00:50:18.718 [2024-07-23 09:03:31.163246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.163342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.163651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.718 [2024-07-23 09:03:31.163734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.718 qpair failed and we were unable to recover it. 00:50:18.718 [2024-07-23 09:03:31.163988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.164033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.164331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.164416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.164740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.164824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.165152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.165197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.165466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.165552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.165904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.165988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.166260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.166305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.166558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.166642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 
00:50:18.719 [2024-07-23 09:03:31.167002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.167097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.167401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.167447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.167655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.167737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.168080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.168163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.168456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.168502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.168743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.168825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.169142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.169223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.169569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.169654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.170005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.170088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.170453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.170499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 
00:50:18.719 [2024-07-23 09:03:31.170769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.170859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.171167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.171250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.171596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.171679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.171974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.172019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.172347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.172432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.172782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.172866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.173159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.173205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.173491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.173538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.173789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.173835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.174164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.174249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 
00:50:18.719 [2024-07-23 09:03:31.174594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.174677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.174995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.175079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.175432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.175505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.175866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.175949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.176272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.176388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.176745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.176790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.177067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.177223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.719 qpair failed and we were unable to recover it. 00:50:18.719 [2024-07-23 09:03:31.177565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.719 [2024-07-23 09:03:31.177650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.177975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.178038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.178372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.178456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 
00:50:18.720 [2024-07-23 09:03:31.178769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.178853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.179123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.179169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.179440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.179525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.179830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.179913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.180226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.180271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.180606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.180689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.181036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.181118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.181413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.181459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.181762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.181845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.182130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.182211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 
00:50:18.720 [2024-07-23 09:03:31.182509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.182561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.182841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.182924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.183205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.183288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.183580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.183626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.183850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.183933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.184253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.184367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.184674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.184720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.185017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.185101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.185460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.185544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.185887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.185966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 
00:50:18.720 [2024-07-23 09:03:31.186289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.186386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.186692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.186775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.187072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.187117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.187329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.187376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.187668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.187750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.188077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.188170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.188499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.188584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.720 [2024-07-23 09:03:31.188889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.720 [2024-07-23 09:03:31.188972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.720 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.189225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.189270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.189504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.189549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 
00:50:18.721 [2024-07-23 09:03:31.189866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.189949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.190263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.190319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.190690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.190772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.191071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.191154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.191468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.191514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.191831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.191913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.192180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.192263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.192610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.192656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.192941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.193025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.193354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.193438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 
00:50:18.721 [2024-07-23 09:03:31.193784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.193863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.194190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.194285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.194620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.194704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.195010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.195055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.195273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.195325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.195687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.195770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.196127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.196210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.196529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.196576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.196879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.196960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.197215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.197260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 
00:50:18.721 [2024-07-23 09:03:31.197541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.197593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.197952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.198034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.198333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.198379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.198704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.198787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.199092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.199175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.199465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.199511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.199799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.199881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.200223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.200306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.200633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.200679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.721 [2024-07-23 09:03:31.200980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.201065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 
00:50:18.721 [2024-07-23 09:03:31.201363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.721 [2024-07-23 09:03:31.201471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.721 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.201793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.201838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.202187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.202270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.202617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.202701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.203035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.203081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.203420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.203505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.203817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.203899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.204198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.204243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.204556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.204601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.204900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.204984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 
00:50:18.722 [2024-07-23 09:03:31.205344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.205410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.205648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.205724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.206077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.206160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.206456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.206502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.206855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.206938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.207243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.207345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.207587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.207633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.207852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.207897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.208087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.208170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.208488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.208535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 
00:50:18.722 [2024-07-23 09:03:31.208793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.208877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.209188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.209270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.209589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.209635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.209860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.209905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.210177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.210260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.210580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.210625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.210894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.210976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.722 [2024-07-23 09:03:31.211293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.722 [2024-07-23 09:03:31.211394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.722 qpair failed and we were unable to recover it. 00:50:18.993 [2024-07-23 09:03:31.211716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.993 [2024-07-23 09:03:31.211812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.993 qpair failed and we were unable to recover it. 00:50:18.993 [2024-07-23 09:03:31.212116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.993 [2024-07-23 09:03:31.212200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.993 qpair failed and we were unable to recover it. 
00:50:18.993 [2024-07-23 09:03:31.212530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.993 [2024-07-23 09:03:31.212625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.993 qpair failed and we were unable to recover it. 00:50:18.993 [2024-07-23 09:03:31.212920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.993 [2024-07-23 09:03:31.212964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.993 qpair failed and we were unable to recover it. 00:50:18.993 [2024-07-23 09:03:31.213207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.993 [2024-07-23 09:03:31.213253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.993 qpair failed and we were unable to recover it. 00:50:18.993 [2024-07-23 09:03:31.213458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.993 [2024-07-23 09:03:31.213504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.993 qpair failed and we were unable to recover it. 00:50:18.993 [2024-07-23 09:03:31.213757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.993 [2024-07-23 09:03:31.213802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.993 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.214002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.214047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.214251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.214296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.214538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.214583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.214832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.214915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.215265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.215365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 
00:50:18.994 [2024-07-23 09:03:31.215613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.215659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.215930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.216014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.216345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.216429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.216777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.216823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.217115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.217197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.217528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.217613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.217956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.218035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.218389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.218472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.218783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.218867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.219173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.219218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 
00:50:18.994 [2024-07-23 09:03:31.219484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.219568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.219881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.219964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.220209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.220254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.220493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.220539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.220831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.220914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.221251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.221351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.221715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.221799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.222170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.222254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.222580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.222625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 00:50:18.994 [2024-07-23 09:03:31.222948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:18.994 [2024-07-23 09:03:31.223031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:18.994 qpair failed and we were unable to recover it. 
00:50:18.994 [2024-07-23 09:03:31.223380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:18.994 [2024-07-23 09:03:31.223465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:18.994 qpair failed and we were unable to recover it.
00:50:18.994 [2024-07-23 09:03:31.223 - 09:03:31.303] (the three messages above - posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. - repeat back-to-back for the whole interval)
00:50:19.000 [2024-07-23 09:03:31.302248] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:50:19.000 [2024-07-23 09:03:31.302579] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:50:19.000 [... the connect() failed, errno = 111 / sock connection error / qpair failed sequence continues to repeat against tqpair=0x61500021ff00, addr=10.0.0.2, port=4420 from 09:03:31.303 through 09:03:31.370 ...]
00:50:19.005 [2024-07-23 09:03:31.370165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.005 [2024-07-23 09:03:31.370248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.005 qpair failed and we were unable to recover it.
00:50:19.005 [2024-07-23 09:03:31.370557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.005 [2024-07-23 09:03:31.370602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.005 qpair failed and we were unable to recover it. 00:50:19.005 [2024-07-23 09:03:31.370896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.005 [2024-07-23 09:03:31.370978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.005 qpair failed and we were unable to recover it. 00:50:19.005 [2024-07-23 09:03:31.371341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.005 [2024-07-23 09:03:31.371425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.005 qpair failed and we were unable to recover it. 00:50:19.005 [2024-07-23 09:03:31.371722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.005 [2024-07-23 09:03:31.371766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.005 qpair failed and we were unable to recover it. 00:50:19.005 [2024-07-23 09:03:31.372082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.005 [2024-07-23 09:03:31.372165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.005 qpair failed and we were unable to recover it. 00:50:19.005 [2024-07-23 09:03:31.372491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.005 [2024-07-23 09:03:31.372574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.005 qpair failed and we were unable to recover it. 00:50:19.005 [2024-07-23 09:03:31.372912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.005 [2024-07-23 09:03:31.372956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.005 qpair failed and we were unable to recover it. 00:50:19.005 [2024-07-23 09:03:31.373282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.373381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.373720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.373805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.374147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.374251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 
00:50:19.006 [2024-07-23 09:03:31.374581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.374666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.374972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.375054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.375400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.375481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.375833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.375918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.376227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.376333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.376691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.376765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.377110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.377193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.377503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.377588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.377926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.377994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.378342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.378425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 
00:50:19.006 [2024-07-23 09:03:31.378671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.378755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.379059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.379104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.379455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.379539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.379851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.379935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.380234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.380279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.380585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.380667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.380978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.381060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.381408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.381501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.381815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.381898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.382241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.382348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 
00:50:19.006 [2024-07-23 09:03:31.382655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.382707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.383019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.383101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.383412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.383497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.383794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.383838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.384085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.384167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.384521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.384605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.384872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.384917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.385202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.385285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.385656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.385739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.386042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.386086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 
00:50:19.006 [2024-07-23 09:03:31.386403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.386487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.006 [2024-07-23 09:03:31.386830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.006 [2024-07-23 09:03:31.386912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.006 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.387225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.387270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.387606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.387689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.388046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.388129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.388466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.388543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.388893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.388975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.389348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.389432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.389773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.389849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.390165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.390247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 
00:50:19.007 [2024-07-23 09:03:31.391286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.391391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.391706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.391783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.392094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.392177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.392520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.392604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.392916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.392962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.393134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.393179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.393396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.393481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.393829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.393915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.394274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.394374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.394686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.394769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 
00:50:19.007 [2024-07-23 09:03:31.395081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.395126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.395413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.395497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.395842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.395925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.396207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.396253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.396551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.396632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.396980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.397063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.397372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.397418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.397740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.397822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.398185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.398268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.398618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.398689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 
00:50:19.007 [2024-07-23 09:03:31.399036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.399130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.399451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.399536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.399853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.399925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.400237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.400371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.400692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.400776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.401079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.401124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.007 qpair failed and we were unable to recover it. 00:50:19.007 [2024-07-23 09:03:31.401405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.007 [2024-07-23 09:03:31.401499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.401825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.401908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.402213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.402258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.402615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.402698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 
00:50:19.008 [2024-07-23 09:03:31.402994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.403077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.403409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.403456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.403781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.403863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.404209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.404292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.404684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.404750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.405022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.405105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.405410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.405493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.405840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.405886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.406251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.406350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.406710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.406794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 
00:50:19.008 [2024-07-23 09:03:31.407132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.407206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.407529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.407575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.407840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.407885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.408168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.408267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.408597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.408680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.409003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.409087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.409394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.409441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.409684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.409767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.410114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.410196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.410547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.410627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 
00:50:19.008 [2024-07-23 09:03:31.410997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.411079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.411384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.411468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.411774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.411819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.412189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.412271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.412605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.412689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.413004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.008 [2024-07-23 09:03:31.413049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.008 qpair failed and we were unable to recover it. 00:50:19.008 [2024-07-23 09:03:31.413241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.413287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.413557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.413641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.413986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.414064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.414379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.414464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 
00:50:19.009 [2024-07-23 09:03:31.414739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.414821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.415105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.415150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.415418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.415503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.415848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.415931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.416254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.416357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.416678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.416762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.417063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.417145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.417489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.417537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.417798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.417881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.418189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.418272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 
00:50:19.009 [2024-07-23 09:03:31.418629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.418701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.419051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.419134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.419449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.419534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.419837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.419883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.420152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.420235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.420622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.420707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.420974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.421019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.421295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.421394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.421760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.421843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.422187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.422266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 
00:50:19.009 [2024-07-23 09:03:31.422592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.422676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.423026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.423111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.423395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.423441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.423725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.423808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.424157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.424241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.424590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.424636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.425036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.425120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.425440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.425557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.425842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.425887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 00:50:19.009 [2024-07-23 09:03:31.426252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.009 [2024-07-23 09:03:31.426349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.009 qpair failed and we were unable to recover it. 
00:50:19.009 [2024-07-23 09:03:31.426668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.426750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.427026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.427072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.427295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.427395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.427718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.427801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.428140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.428215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.428500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.428546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.428818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.428901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.429211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.429257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.429627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.429709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.430062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.430145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 
00:50:19.010 [2024-07-23 09:03:31.430454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.430501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.430798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.430882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.431196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.431280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.431592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.431637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.431970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.432052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.432326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.432410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.432682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.432728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.433030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.433114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.433435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.433520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.433863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.433909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 
00:50:19.010 [2024-07-23 09:03:31.434222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.434305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.434680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.434764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.435053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.435098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.435439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.435525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.435844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.435927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.436270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.436324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.436688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.436771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.437076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.437158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.437501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.437577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.437941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.438025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 
00:50:19.010 [2024-07-23 09:03:31.438338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.438421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.438717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.438762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.439051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.439133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.439481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.439567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.439904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.439973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.010 [2024-07-23 09:03:31.440342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.010 [2024-07-23 09:03:31.440428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.010 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.440735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.440818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.441134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.441185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.441532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.441579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.441906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.441990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 
00:50:19.011 [2024-07-23 09:03:31.442294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.442368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.442698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.442782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.443128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.443211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.443518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.443565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.443908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.443991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.444367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.444453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.444763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.444809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.445077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.445159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.445469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.445554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.445911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.445980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 
00:50:19.011 [2024-07-23 09:03:31.446282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.446380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.446744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.446828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.447133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.447178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.447565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.447651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.448009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.448093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.448435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.448512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.448846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.448930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.449286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.449386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.449658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.449703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.449979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.450061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 
00:50:19.011 [2024-07-23 09:03:31.450433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.450518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.450791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.450850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.451123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.451206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.451499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.451582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.451939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.452013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.452381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.452466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.452807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.452889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.453206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.453251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.453646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.453731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.454032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.454114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 
00:50:19.011 [2024-07-23 09:03:31.454457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.454539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.011 [2024-07-23 09:03:31.454896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.011 [2024-07-23 09:03:31.454979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.011 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.455362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.455449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.455790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.455863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.456206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.456289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.456653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.456736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.457065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.457111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.457457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.457551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.457899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.457982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.458280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.458335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 
00:50:19.012 [2024-07-23 09:03:31.458644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.458726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.459088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.459171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.459519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.459593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.459920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.460003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.460339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.460424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.460775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.460853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.461153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.461215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.461485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.461549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.461858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.461920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.462148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.462211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 
00:50:19.012 [2024-07-23 09:03:31.462532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.462597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.462878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.462923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.463179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.463242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.463527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.463590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.463858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.463903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.464216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.464298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.464635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.464717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.465008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.465053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.465392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.465456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.465752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.465834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 
00:50:19.012 [2024-07-23 09:03:31.466175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.466239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.466550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.466596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.466879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.466961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.467295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.467370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.467646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.467728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.468048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.468131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.468427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.468473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.468763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.012 [2024-07-23 09:03:31.468846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.012 qpair failed and we were unable to recover it. 00:50:19.012 [2024-07-23 09:03:31.469192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.469275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.469596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.469641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 
00:50:19.013 [2024-07-23 09:03:31.469982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.470064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.470407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.470469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.470742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.470788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.471063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.471146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.471468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.471531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.471841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.471886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.472239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.472334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.472627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.472719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.472996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.473042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.473297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.473398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 
00:50:19.013 [2024-07-23 09:03:31.473686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.473767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.474119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.474202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.474558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.474623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.474967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.475049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.475385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.475454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.475789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.475895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.476184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.476267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.476599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.476646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.476985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.477067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.477390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.477453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 
00:50:19.013 [2024-07-23 09:03:31.477774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.477852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.478218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.478300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.478558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.478658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.478923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.478969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.479273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.479383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.479701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.479784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.480125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.480188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.480564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.480610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.013 [2024-07-23 09:03:31.480947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.013 [2024-07-23 09:03:31.481030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.013 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.481306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.481361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 
00:50:19.014 [2024-07-23 09:03:31.481666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.481749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.482041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.482125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.482466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.482512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.482888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.482971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.483291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.483403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.483661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.483707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.484024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.484107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.484461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.484524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.484824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.484869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.485170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.485252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 
00:50:19.014 [2024-07-23 09:03:31.485626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.485710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.486060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.486152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.486479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.486542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.486897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.486979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.487329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.487399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.487732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.487814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.488127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.488209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.488529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.488581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.488881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.488963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.489306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.489428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 
00:50:19.014 [2024-07-23 09:03:31.489708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.489753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.490098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.490181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.490504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.490588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.490907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.490952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.491273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.491372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.491716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.491799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.492150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.492246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.492608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.492696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.493048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.493143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.493485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.493561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 
00:50:19.014 [2024-07-23 09:03:31.493865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.493949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.494318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.494402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.494691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.494736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.495051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.495133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.495492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.014 [2024-07-23 09:03:31.495576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.014 qpair failed and we were unable to recover it. 00:50:19.014 [2024-07-23 09:03:31.495830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.495875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.496174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.496256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.496619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.496703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.497016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.497061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.497381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.497465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 
00:50:19.015 [2024-07-23 09:03:31.497757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.497840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.498143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.498189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.498454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.498536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.498879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.498961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.499269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.499323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.499525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.499602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.499949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.500032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.500372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.500445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.500731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.500815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.501162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.501277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 
00:50:19.015 [2024-07-23 09:03:31.501646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.501692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.501933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.502015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.502340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.502410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.502637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.502683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.502975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.503057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.015 [2024-07-23 09:03:31.503376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.015 [2024-07-23 09:03:31.503460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.015 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.503797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.503871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.504240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.504347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.504701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.504784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.505085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.505129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 
00:50:19.286 [2024-07-23 09:03:31.505361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.505429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.505714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.505797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.506082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.506127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.506347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.506431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.506741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.506824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.507127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.507173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.507434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.507519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.507780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.507862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.508158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.508203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.508555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.508626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 
00:50:19.286 [2024-07-23 09:03:31.508964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.509046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.509350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.509411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.509782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.509865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.510167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.510250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.510558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.510604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.510901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.510985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.511292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.511390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.511730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.511794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.512071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.512155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.512500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.512585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 
00:50:19.286 [2024-07-23 09:03:31.512882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.512928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.513227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.513337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.513647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.513731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.514035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.514081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.514398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.514482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.514822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.514905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.515217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.515263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.515604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.286 [2024-07-23 09:03:31.515687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.286 qpair failed and we were unable to recover it. 00:50:19.286 [2024-07-23 09:03:31.515996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.516080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.516388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.516434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 
00:50:19.287 [2024-07-23 09:03:31.516740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.516824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.517193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.517276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.517596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.517641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.517984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.518066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.518382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.518428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.518714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.518814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.519134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.519218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 EAL: No free 2048 kB hugepages reported on node 1 00:50:19.287 [2024-07-23 09:03:31.519585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.519680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.519961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.520006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.520275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.520374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 
00:50:19.287 [2024-07-23 09:03:31.520723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.520805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.521101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.521147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.521479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.521564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.521878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.521961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.522316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.522389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.522687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.522770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.523073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.523156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.523460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.523506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.523805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.523888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.524241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.524336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 
00:50:19.287 [2024-07-23 09:03:31.524680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.524757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.525119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.525202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.525602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.525687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.525956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.526016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.526300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.526404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.526683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.526765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.527061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.527106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.527364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.527450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.527803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.527886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.528194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.528239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 
00:50:19.287 [2024-07-23 09:03:31.528514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.528560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.528888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.528972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.529325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.287 [2024-07-23 09:03:31.529409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.287 qpair failed and we were unable to recover it. 00:50:19.287 [2024-07-23 09:03:31.529702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.529748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.530046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.530129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.530476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.530550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.530863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.530945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.531195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.531278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.531614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.531659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.532011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.532093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 
00:50:19.288 [2024-07-23 09:03:31.532445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.532529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.532783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.532829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.533139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.533221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.533550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.533615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.533948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.534032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.534368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.534441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.534716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.534801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.535143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.535194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.535561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.535635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.535984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.536067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 
00:50:19.288 [2024-07-23 09:03:31.536419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.536515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.536823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.536906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.537256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.537354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.537665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.537711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.537965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.538048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.538408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.538492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.538822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.538912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.539214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.539296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.539664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.539748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.540058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.540104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 
00:50:19.288 [2024-07-23 09:03:31.540429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.540514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.540828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.540912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.541205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.541250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.541560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.541607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.541913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.541996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.542346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.542422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.542741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.542842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.543186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.543269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.288 qpair failed and we were unable to recover it. 00:50:19.288 [2024-07-23 09:03:31.543581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.288 [2024-07-23 09:03:31.543626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.543931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.544012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 
00:50:19.289 [2024-07-23 09:03:31.544356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.544441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.544733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.544779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.545071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.545153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.545464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.545547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.545852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.545896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.546221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.546301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.546557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.546604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.546932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.547006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.547364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.547449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.547799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.547882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 
00:50:19.289 [2024-07-23 09:03:31.548176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.548222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.548546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.548592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.548882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.548966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.549267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.549322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.549618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.549701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.550005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.550089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.550441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.550524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.550885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.551004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.551303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.551416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.551731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.551776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 
00:50:19.289 [2024-07-23 09:03:31.552142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.552225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.552561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.552636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.552978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.553056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.553414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.553474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.553725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.553809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.554157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.554242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.554570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.554616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.554978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.555060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.555401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.555448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 00:50:19.289 [2024-07-23 09:03:31.555699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.289 [2024-07-23 09:03:31.555782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.289 qpair failed and we were unable to recover it. 
00:50:19.289 [2024-07-23 09:03:31.556130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:50:19.289 [2024-07-23 09:03:31.556212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 
00:50:19.289 qpair failed and we were unable to recover it. 
00:50:19.289 [... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 09:03:31.556 through 09:03:31.659 ...] 
00:50:19.296 [2024-07-23 09:03:31.659299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:50:19.296 [2024-07-23 09:03:31.659432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 
00:50:19.296 qpair failed and we were unable to recover it. 
00:50:19.296 [2024-07-23 09:03:31.659819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.659932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.660268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.660397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.660784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.660895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.661267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.661415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.661773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.661837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.662239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.662373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.662804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.662916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.662972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:50:19.296 [2024-07-23 09:03:31.663265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.663370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.663754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.296 [2024-07-23 09:03:31.663866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.296 qpair failed and we were unable to recover it.
00:50:19.296 [2024-07-23 09:03:31.664282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.664417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.664819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.664941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.665370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.665497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.665927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.666038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.666373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.666437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.666760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.666873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.667215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.667344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.667746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.667808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.668077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.668190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.668605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.668718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 
00:50:19.296 [2024-07-23 09:03:31.669119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.669244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.669710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.669823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.670209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.670347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.670676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.670758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.296 [2024-07-23 09:03:31.671176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.296 [2024-07-23 09:03:31.671287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.296 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.671701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.671813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.672203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.672337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.672768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.672881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.673327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.673443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.673793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.673919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 
00:50:19.297 [2024-07-23 09:03:31.674300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.674436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.674759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.674884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.675245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.675319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.675580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.675693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.676072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.676212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.676597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.676699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.677118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.677230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.677639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.677756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.678128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.678191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.678523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.678639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 
00:50:19.297 [2024-07-23 09:03:31.679032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.679150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.679559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.679673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.680035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.680099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.680421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.680548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.680917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.681029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.681451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.681567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.681942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.682068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.682489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.682605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.683028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.683140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.683531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.683645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 
00:50:19.297 [2024-07-23 09:03:31.684042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.684156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.684542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.684604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.684844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.684905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.685279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.685408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.685772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.685898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.686289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.686427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.686838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.686949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.687374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.687488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.297 [2024-07-23 09:03:31.687888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.297 [2024-07-23 09:03:31.687990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.297 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.688420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.688533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 
00:50:19.298 [2024-07-23 09:03:31.688957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.689069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.689460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.689574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.689937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.689999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.690389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.690504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.690883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.690997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.691386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.691502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.691896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.691999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.692393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.692506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.692889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.693002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.693423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.693538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 
00:50:19.298 [2024-07-23 09:03:31.693941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.694003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.694456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.694584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.694970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.695082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.695477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.695591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.695969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.696105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.696528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.696647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.697052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.697168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.697624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.697742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.698091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.698196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.698607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.698729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 
00:50:19.298 [2024-07-23 09:03:31.699127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.699242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.699685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.699798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.700162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.700251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.700690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.700804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.701220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.701372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.701760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.701846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.702099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.702144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.702358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.702445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.702803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.702887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.703182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.703266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 
00:50:19.298 [2024-07-23 09:03:31.703632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.703678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.703931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.704014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.704334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.704419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.704781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.704866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.298 qpair failed and we were unable to recover it. 00:50:19.298 [2024-07-23 09:03:31.705165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.298 [2024-07-23 09:03:31.705211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.705577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.705623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.705885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.705969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.706354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.706427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.706707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.706828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.707177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.707265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 
00:50:19.299 [2024-07-23 09:03:31.707608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.707694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.708011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.708095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.708431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.708505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.708861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.708945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.709253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.709356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.709688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.709771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.710103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.710191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.710548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.710596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.710841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.710925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.711232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.711337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 
00:50:19.299 [2024-07-23 09:03:31.711645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.711691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.711961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.712055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.712380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.712427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.712725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.712808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.713143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.713213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.713627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.713712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.714094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.714219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.714661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.714784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.715136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.715203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 00:50:19.299 [2024-07-23 09:03:31.715656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.299 [2024-07-23 09:03:31.715777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:50:19.299 qpair failed and we were unable to recover it. 
00:50:19.299 [2024-07-23 09:03:31.716165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.716286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 A controller has encountered a failure and is being reset.
00:50:19.299 [2024-07-23 09:03:31.716729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.716796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.717109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.717158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.717367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.717439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.717690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.717737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.718026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.718095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.718363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.718423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.718715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.718776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.719085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.299 [2024-07-23 09:03:31.719133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.299 qpair failed and we were unable to recover it.
00:50:19.299 [2024-07-23 09:03:31.719364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.300 [2024-07-23 09:03:31.719409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.300 qpair failed and we were unable to recover it.
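The reset message in the block above ("A controller has encountered a failure and is being reset.") marks the point where the host gives up on the existing queue pair and resets the controller: the tqpair pointer in the subsequent messages changes from 0x61500021ff00 to 0x6150001ffe80, i.e. a fresh qpair object is allocated and the same connect/teardown cycle continues against 10.0.0.2 port 4420. Outwardly this is a bounded reconnect loop; a rough sketch of that pattern, independent of SPDK's actual reset/reconnect code (attempt count, delay and addresses are assumptions for illustration, not values from the test):

    /* Illustrative reconnect loop: retry a TCP connect a fixed number of times
     * with a delay between attempts, the same outward pattern as the repeated
     * qpair reconnects in the log. Not SPDK code. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static int connect_with_retry(const char *ip, unsigned short port, int attempts)
    {
        for (int i = 0; i < attempts; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in sa = { 0 };

            if (fd < 0) {
                return -1;
            }
            sa.sin_family = AF_INET;
            sa.sin_port = htons(port);
            inet_pton(AF_INET, ip, &sa.sin_addr);

            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                return fd;                      /* target reachable again, keep the fd */
            }
            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    i + 1, errno, strerror(errno));
            close(fd);
            sleep(1);                           /* illustrative 1 s back-off */
        }
        return -1;                              /* every attempt refused */
    }

    int main(void)
    {
        int fd = connect_with_retry("127.0.0.1", 4420, 5);
        if (fd >= 0) {
            close(fd);
        }
        return 0;
    }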
00:50:19.300 [2024-07-23 09:03:31.719642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.300 [2024-07-23 09:03:31.719687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.300 qpair failed and we were unable to recover it.
00:50:19.300-00:50:19.306 [the same three-line error pattern repeats for every reconnect attempt from 2024-07-23 09:03:31.719642 through 09:03:31.788004, differing only in timestamps: each connect() to 10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x6150001ffe80, and the qpair cannot be recovered]
00:50:19.306 [2024-07-23 09:03:31.788269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.788325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.788627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.788693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.788981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.789044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.789279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.789335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.789602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.789666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.789963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.790034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.790265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.790320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.790573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.790642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.790940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.791015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.791244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.791290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 
00:50:19.306 [2024-07-23 09:03:31.791543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.791611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.791879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.791946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.792219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.792264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.792496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.792560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.792853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.792922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.793206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.793274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.793508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.793552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.793828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.793895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.794179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.794242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.794532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.794582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 
00:50:19.306 [2024-07-23 09:03:31.794885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.794955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.795193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.795242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.795528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.795574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.795783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.306 [2024-07-23 09:03:31.795851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.306 qpair failed and we were unable to recover it. 00:50:19.306 [2024-07-23 09:03:31.796135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.307 [2024-07-23 09:03:31.796205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.307 qpair failed and we were unable to recover it. 00:50:19.307 [2024-07-23 09:03:31.796498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.307 [2024-07-23 09:03:31.796575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.307 qpair failed and we were unable to recover it. 00:50:19.580 [2024-07-23 09:03:31.796889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.580 [2024-07-23 09:03:31.796935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.580 qpair failed and we were unable to recover it. 00:50:19.580 [2024-07-23 09:03:31.797216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.580 [2024-07-23 09:03:31.797260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.580 qpair failed and we were unable to recover it. 00:50:19.580 [2024-07-23 09:03:31.797523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.580 [2024-07-23 09:03:31.797588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.580 qpair failed and we were unable to recover it. 00:50:19.580 [2024-07-23 09:03:31.797872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.580 [2024-07-23 09:03:31.797939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.580 qpair failed and we were unable to recover it. 
00:50:19.580 [2024-07-23 09:03:31.798209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.580 [2024-07-23 09:03:31.798254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.580 qpair failed and we were unable to recover it. 00:50:19.580 [2024-07-23 09:03:31.798534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.580 [2024-07-23 09:03:31.798588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.580 qpair failed and we were unable to recover it. 00:50:19.580 [2024-07-23 09:03:31.798873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.580 [2024-07-23 09:03:31.798943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.580 qpair failed and we were unable to recover it. 00:50:19.580 [2024-07-23 09:03:31.799219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.799267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.799471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.799517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.799810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.799876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.800159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.800225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.800443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.800488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.800803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.800848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.801129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.801199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 
00:50:19.581 [2024-07-23 09:03:31.801494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.801564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.801846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.801912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.802125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.802169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.802351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.802411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.802706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.802779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.803075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.803121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.803357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.803402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.803608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.803672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.803956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.804025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.804290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.804356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 
00:50:19.581 [2024-07-23 09:03:31.804629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.804674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.804963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.805035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.805270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.805325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.805604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.805649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.805937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.806008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.806279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.806342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.806627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.806671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.806927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.806974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.807208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.807272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.807559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.807604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 
00:50:19.581 [2024-07-23 09:03:31.807880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.807940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.808191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.808254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.808536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.808582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.808836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.808900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.809178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.809241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.809550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.809633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.809924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.809969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.810234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.810278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.810528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.581 [2024-07-23 09:03:31.810572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.581 qpair failed and we were unable to recover it. 00:50:19.581 [2024-07-23 09:03:31.810870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.810916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 
00:50:19.582 [2024-07-23 09:03:31.811157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.811222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.811516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.811578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.811769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.811833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.812122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.812195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.812447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.812517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.812813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.812879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.813158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.813228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.813469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.813532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.813829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.813902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.814156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.814219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 
00:50:19.582 [2024-07-23 09:03:31.814462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.814526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.814819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.814881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.815168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.815232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.815506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.815571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.815878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.815941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.816167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.816211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.816495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.816561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.816861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.816928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.817163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.817207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.817483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.817545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 
00:50:19.582 [2024-07-23 09:03:31.817842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.817906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.818183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.818244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.818497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.818560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.818840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.818901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.819186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.819261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.819552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.819628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.819866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.819926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.820145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.820207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.820476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.820539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.820798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.820860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 
00:50:19.582 [2024-07-23 09:03:31.821115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.821178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.821474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.821521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.821795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.821861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.822145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.822214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.822513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.582 [2024-07-23 09:03:31.822559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.582 qpair failed and we were unable to recover it. 00:50:19.582 [2024-07-23 09:03:31.822828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.822890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.823140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.823184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.823460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.823545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.823811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.823872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.824179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.824223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 
00:50:19.583 [2024-07-23 09:03:31.824490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.824553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.824773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.824837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.825074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.825139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.825426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.825495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.825779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.825843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.826125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.826191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.826398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.826465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.826752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.826817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.827114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.827188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.827473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.827541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 
00:50:19.583 [2024-07-23 09:03:31.827840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.827887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.828090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.828135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.828385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.828431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.828723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.828793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.829081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.829153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.829449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.829495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.829806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.829874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.830156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.830202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.830485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.830555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.830850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.830920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 
00:50:19.583 [2024-07-23 09:03:31.831190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.831235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.831454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.831522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.831822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.831894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.832190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.832236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.832542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.832612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.832834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.832896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.833175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.833237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.833445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.833510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.833792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.833857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.834156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.834201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 
00:50:19.583 [2024-07-23 09:03:31.834444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.583 [2024-07-23 09:03:31.834509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.583 qpair failed and we were unable to recover it. 00:50:19.583 [2024-07-23 09:03:31.834792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.834853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.835086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.835148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.835424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.835487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.835738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.835800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.836084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.836146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.836437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.836499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.836731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.836794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.837089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.837164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.837391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.837469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 
00:50:19.584 [2024-07-23 09:03:31.837729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.837794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.838007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.838068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.838349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.838394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.838656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.838717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.839028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.839081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.839368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.839414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.839662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.839724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.840006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.840068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.840341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.840392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.840674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.840743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 
00:50:19.584 [2024-07-23 09:03:31.841020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.841082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.841323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.841368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.841652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.841697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.841937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.842001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.842270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.842323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.842562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.842606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.842855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.842916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.843199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.843279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.843562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.843606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.843814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.843876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 
00:50:19.584 [2024-07-23 09:03:31.844165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.844240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.844521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.844580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.844867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.844938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.845215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.845279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.845564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.584 [2024-07-23 09:03:31.845608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.584 qpair failed and we were unable to recover it. 00:50:19.584 [2024-07-23 09:03:31.845909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.845956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.846227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.846271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.846521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.846568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.846818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.846880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.847150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.847211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 
00:50:19.585 [2024-07-23 09:03:31.847444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.847489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.847721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.847784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.848071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.848139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.848411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.848475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.848765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.848837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.849083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.849145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.849390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.849457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.849741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.849810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.850091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.850151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.850397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.850464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 
00:50:19.585 [2024-07-23 09:03:31.850722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.850768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.851059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.851122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.851378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.851443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.851688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.851750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.852052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.852127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.852369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.852432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.852710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.852772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.853085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.853156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.853419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.853512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.853820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.853903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 
00:50:19.585 [2024-07-23 09:03:31.854183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.854229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.854484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.854550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.854806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.854870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.855145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.855207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.585 [2024-07-23 09:03:31.855495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.585 [2024-07-23 09:03:31.855559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.585 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.855846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.855911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.856150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.856194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.856481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.856558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.856845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.856908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.857178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.857223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 
00:50:19.586 [2024-07-23 09:03:31.857468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.857530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.857817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.857897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.858212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.858280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.858584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.858661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.858944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.859013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.859287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.859342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.859589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.859651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.859924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.859987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.860256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.860300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.860568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.860636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 
00:50:19.586 [2024-07-23 09:03:31.860916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.860984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.861218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.861262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.861560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.861636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.861879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.861943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.862228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.862293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.862556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.862623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.862905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.862969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.863249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.863325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.863619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.863683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.863966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.864036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 
00:50:19.586 [2024-07-23 09:03:31.864304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.864358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.864620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.864683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.864946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.865009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.865289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.865342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.865686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.865748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.866035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.866113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.866409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.866479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.866750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.866795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.867093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.867178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 00:50:19.586 [2024-07-23 09:03:31.867448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.586 [2024-07-23 09:03:31.867497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.586 qpair failed and we were unable to recover it. 
00:50:19.586 [2024-07-23 09:03:31.867809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.867877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.868160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.868228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.868503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.868548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.868833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.868901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.869184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.869252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.869488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.869533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.869791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.869853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.870102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.870162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.870398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.870465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.870752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.870821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 
00:50:19.587 [2024-07-23 09:03:31.871107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.871169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.871459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.871528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.871825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.871897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.872134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.872197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.872448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.872512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.872749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.872813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.873089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.873149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.873442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.873514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.873798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.873870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.874102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.874165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 
00:50:19.587 [2024-07-23 09:03:31.874473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.874522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.874798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.874860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.875144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.875209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.875488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.875559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.875838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.875908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.876197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.876242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.876493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.876555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.876829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.876893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.877181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.877244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.877491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.877554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 
00:50:19.587 [2024-07-23 09:03:31.877835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.877902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.878132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.878194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.878498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.878564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.878853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.878918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.879190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.879234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.587 [2024-07-23 09:03:31.879491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.587 [2024-07-23 09:03:31.879555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.587 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.879799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.879862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.880165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.880238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.880536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.880606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.880845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.880907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 
00:50:19.588 [2024-07-23 09:03:31.881200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.881273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.881558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.881635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.881868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.881938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.882157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.882201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.882441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.882506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.882787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.882849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.883040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.883102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.883370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.883416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.883660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.883723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.884007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.884076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 
00:50:19.588 [2024-07-23 09:03:31.884269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.884322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.884554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.884624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.884938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.885004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.885273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.885326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.885604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.885648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.885933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.886002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.886254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.886297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.886595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.886640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.886939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.887021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 00:50:19.588 [2024-07-23 09:03:31.887236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.588 [2024-07-23 09:03:31.887300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.588 qpair failed and we were unable to recover it. 
00:50:19.588 [2024-07-23 09:03:31.887507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.588 [2024-07-23 09:03:31.887552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.588 qpair failed and we were unable to recover it.
00:50:19.588 [2024-07-23 09:03:31.887814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.588 [2024-07-23 09:03:31.887878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.588 qpair failed and we were unable to recover it.
00:50:19.588 [2024-07-23 09:03:31.888 - 09:03:31.958] posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same pair of errors (connect() failed, errno = 111; sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats for every further reconnect attempt in this interval.
00:50:19.594 [2024-07-23 09:03:31.958236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.594 [2024-07-23 09:03:31.958281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.594 qpair failed and we were unable to recover it.
00:50:19.594 [2024-07-23 09:03:31.958463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.958529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.958837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.958905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.959138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.959182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.959411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.959456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.959685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.959748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.959993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.960057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.960344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.960391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.960561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.960623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.960814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.960883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.961117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.961162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 
00:50:19.594 [2024-07-23 09:03:31.961461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.961526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.594 qpair failed and we were unable to recover it. 00:50:19.594 [2024-07-23 09:03:31.961816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.594 [2024-07-23 09:03:31.961887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.962033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.962076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.962250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.962293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.962503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.962572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.962865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.962938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.963114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.963158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.963399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.963468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.963726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.963771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.963986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.964030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 
00:50:19.595 [2024-07-23 09:03:31.964249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.964296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.964497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.964571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.964868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.964929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.965157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.965201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.965422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.965487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.965717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.965780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.966050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.966095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.966362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.966413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.966656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.966720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.966960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.967024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 
00:50:19.595 [2024-07-23 09:03:31.967281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.967337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.967558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.967629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.967959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.968023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.968204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.968248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.968434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.968480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.968708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.968770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.969041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.969102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.969334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.969379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.969528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.969573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.969807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.969871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 
00:50:19.595 [2024-07-23 09:03:31.970092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.970155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.595 qpair failed and we were unable to recover it. 00:50:19.595 [2024-07-23 09:03:31.970371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.595 [2024-07-23 09:03:31.970440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.970744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.970814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.970994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.971058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.971330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.971377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.971550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.971628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.971821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.971886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.972144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.972207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.972435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.972506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.972766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.972828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 
00:50:19.596 [2024-07-23 09:03:31.973069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.973132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.973361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.973406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.973609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.973685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.973894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.973957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.974182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.974227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.974456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.974520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.974786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.974848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.975142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.975208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.975418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.975480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.975777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.975849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 
00:50:19.596 [2024-07-23 09:03:31.976145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.976212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.976425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.976488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.976764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.976828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.977070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.977133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.977401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.977465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.977752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.977817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.978052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.978100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.978322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.978367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.978549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.978613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.978809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.978873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 
00:50:19.596 [2024-07-23 09:03:31.979164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.979226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.979449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.979514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.979714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.979778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.980011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.980075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.980290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.980345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.980516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.980583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.980786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.596 [2024-07-23 09:03:31.980849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.596 qpair failed and we were unable to recover it. 00:50:19.596 [2024-07-23 09:03:31.981040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.981104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.981318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.981364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.981523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.981597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 
00:50:19.597 [2024-07-23 09:03:31.981840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.981884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.982111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.982172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.982399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.982462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.982655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.982719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.983005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.983071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.983317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.983361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.983532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.983600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.983795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.983858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.984046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.984111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.984363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.984409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 
00:50:19.597 [2024-07-23 09:03:31.984555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.984599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.984770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.984833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.985007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.985062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.985328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.985375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.985535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.985603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.985823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.985888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.986073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.986134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.986342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.986387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.986552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.986619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.986910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.986985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 
00:50:19.597 [2024-07-23 09:03:31.987230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.987277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.987503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.987568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.987856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.987920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.988132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.988176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.988414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.988477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.988764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.988830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.989059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.989123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.989377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.989448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.989634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.989696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.989883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.989958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 
00:50:19.597 [2024-07-23 09:03:31.990143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.990187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.990387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.990450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.990675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.990719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.597 qpair failed and we were unable to recover it. 00:50:19.597 [2024-07-23 09:03:31.990988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.597 [2024-07-23 09:03:31.991031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.991267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.991319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.991472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.991549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.991801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.991862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.992100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.992144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.992397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.992462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.992655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.992719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 
00:50:19.598 [2024-07-23 09:03:31.992941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.993003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.993210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.993255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.993465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.993530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.993740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.993784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.993985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.994029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.994293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.994357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.994534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.994598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.994817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.994861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.995098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.995160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.995379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.995451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 
00:50:19.598 [2024-07-23 09:03:31.995648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.995712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.995900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.995974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.996257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.996301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.996469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.996514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.996699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.996744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.996918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.996962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.997168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.997212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.997374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.997446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.997656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.997702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.997955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.998019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 
00:50:19.598 [2024-07-23 09:03:31.998292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.998346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.998539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.998612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.998891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.998955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.999201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.999246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.999457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.999521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.999690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:31.999752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:31.999949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:32.000013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:32.000186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:32.000230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:32.000440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.598 [2024-07-23 09:03:32.000503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.598 qpair failed and we were unable to recover it. 00:50:19.598 [2024-07-23 09:03:32.000778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.000823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 
00:50:19.599 [2024-07-23 09:03:32.001063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.001128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.001316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.001370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.001524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.001568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.001816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.001860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.002062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.002107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.002398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.002470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.002713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.002777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.003026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.003087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.003306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.003377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.003549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.003612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 
00:50:19.599 [2024-07-23 09:03:32.003884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.003945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.004097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.004168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.004385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.004454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.004640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.004706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.004956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.005019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.005213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.005257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.005439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.005503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.005694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.005757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.005955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.006000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.006283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.006338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 
00:50:19.599 [2024-07-23 09:03:32.006542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.006612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.006879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.006924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.007130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.007175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.007459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.007519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.007680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.007725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.007928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.007990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.008165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.008209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.008395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.008460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.008655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.008722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.009006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.009050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 
00:50:19.599 [2024-07-23 09:03:32.009284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.009337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.009567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.009629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.009926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.009990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.010250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.010294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.010534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.010600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.599 qpair failed and we were unable to recover it. 00:50:19.599 [2024-07-23 09:03:32.010823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.599 [2024-07-23 09:03:32.010886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.011078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.011141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.011380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.011453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.011656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.011720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.011915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.011980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 
00:50:19.600 [2024-07-23 09:03:32.012187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.012231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.012435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.012499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.012704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.012748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.012996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.013041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.013333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.013390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.013560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.013639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.013875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.013919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.014113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.014176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.014441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.014488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.014756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.014816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 
00:50:19.600 [2024-07-23 09:03:32.015025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.015088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.015296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.015349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.015547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.015610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.015836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.015898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.016155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.016226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.016442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.016506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.016702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.016764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.016962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.017025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.017171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.017215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.017395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.017441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 
00:50:19.600 [2024-07-23 09:03:32.017705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.017771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.018030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.018091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.018320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.018370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.018557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.018622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.018898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.018961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.019201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.600 [2024-07-23 09:03:32.019256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.600 qpair failed and we were unable to recover it. 00:50:19.600 [2024-07-23 09:03:32.019447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.019509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.019804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.019875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.020158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.020219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.020423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.020487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 
00:50:19.601 [2024-07-23 09:03:32.020784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.020828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.021120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.021193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.021454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.021517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.021770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.021833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.022103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.022167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.022382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.022454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.022701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.022764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.023036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.023099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.023334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.023385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.023561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.023625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 
00:50:19.601 [2024-07-23 09:03:32.023938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.023984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.024250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.024300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.024488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.024552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.024852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.024931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.025200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.025245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.025439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.025491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.025722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.025784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.026036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.026113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.027418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.027470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.027787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.027850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 
00:50:19.601 [2024-07-23 09:03:32.028137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.028205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.028470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.028533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.028786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.028851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.029137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.029207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.029424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.029489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.029792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.029863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.030145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.030207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.030428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.030492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.030741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.030805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.031107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.031182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 
00:50:19.601 [2024-07-23 09:03:32.031423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.031488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.031807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.601 [2024-07-23 09:03:32.031852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.601 qpair failed and we were unable to recover it. 00:50:19.601 [2024-07-23 09:03:32.032139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.032207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.032434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.032500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.032793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.032858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.033141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.033208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.033421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.033488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.033681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.033745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.034011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.034074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.034361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.034417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 
00:50:19.602 [2024-07-23 09:03:32.034602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.034667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.034933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.034997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.035273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.035327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.035585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.035651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.035901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.035964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.036220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.036266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.036506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.036551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.036770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.036834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.037043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.037106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.037389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.037434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 
00:50:19.602 [2024-07-23 09:03:32.037721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.037785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.037977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.038040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.038223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.038267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.038493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.038555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.038794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.038856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.039138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.039205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.039496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.039562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.039827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.039872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.040203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.040251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.040522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.040584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 
00:50:19.602 [2024-07-23 09:03:32.040842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.040909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.041200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.041273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.041546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.041592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.041930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.042002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.042236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.042281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.042511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.042576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.042901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.042947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.043181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.602 [2024-07-23 09:03:32.043225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.602 qpair failed and we were unable to recover it. 00:50:19.602 [2024-07-23 09:03:32.043442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.043488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.043757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.043802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 
00:50:19.603 [2024-07-23 09:03:32.044025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.044090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.044338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.044383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.044623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.044686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.044964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.045025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.045305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.045357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.045621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.045666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.045923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.045988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.046232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.046277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.046467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.046512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.046768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.046834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 
00:50:19.603 [2024-07-23 09:03:32.047112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.047174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.047381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.047441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.047747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.047811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.048065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.048109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.048391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.048437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.048731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.048799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.049101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.049165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.049432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.049477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.049731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.049792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.050068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.050131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 
00:50:19.603 [2024-07-23 09:03:32.050354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.050401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.050637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.050701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.050961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.051006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.051275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.051330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.051587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.051650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.051931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.052004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.052268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.052323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.052564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.052609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.052902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.052964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 00:50:19.603 [2024-07-23 09:03:32.053170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.603 [2024-07-23 09:03:32.053215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.603 qpair failed and we were unable to recover it. 
00:50:19.603 [2024-07-23 09:03:32.053399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.053445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.053692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.053756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.054024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.054088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.054381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.054426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.054670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.054733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.054934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.054996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.055276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.055337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.055512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.055590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.055884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.055954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.056236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.056280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 
00:50:19.604 [2024-07-23 09:03:32.056454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.056499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.056781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.056845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.057122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.057192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.057414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.057459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.057702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.057766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.057992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.058055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.058328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.058380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.058551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.058615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.058908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.058971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.059240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.059284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 
00:50:19.604 [2024-07-23 09:03:32.059483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.059528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.059830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.059900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.060201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.060247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.060441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.060486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.060782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.060845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.061137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.061210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.061435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.061481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.061765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.061828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.062135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.062180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.062428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.062492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 
00:50:19.604 [2024-07-23 09:03:32.062754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.062816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.063086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.063149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.604 qpair failed and we were unable to recover it. 00:50:19.604 [2024-07-23 09:03:32.063424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.604 [2024-07-23 09:03:32.063487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.063737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.063799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.064074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.064139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.064449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.064501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.064798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.064878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.065158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.065204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.065474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.065538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.065801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.065864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 
00:50:19.605 [2024-07-23 09:03:32.066154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.066230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.066523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.066591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.066856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.066921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.067148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.067193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.067502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.067549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.067774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.067854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.068176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.068221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.068501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.068564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.068865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.068931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.069213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.069258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 
00:50:19.605 [2024-07-23 09:03:32.069488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.069550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.069838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.069902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.070162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.070225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.070470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.070533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.070802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.070870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.071100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.071165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.071403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.071466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.071754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.071828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.072115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.072179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.072434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.072497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 
00:50:19.605 [2024-07-23 09:03:32.072695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.072759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.073055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.073124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.605 qpair failed and we were unable to recover it. 00:50:19.605 [2024-07-23 09:03:32.073408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.605 [2024-07-23 09:03:32.073472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.073731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.073776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.074013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.074074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.074375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.074421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.074685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.074750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.075018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.075080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.075322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.075367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.075510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.075555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 
00:50:19.606 [2024-07-23 09:03:32.075812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.075876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.076131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.076192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.076418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.076465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.076734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.076797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.077061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.077128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.077353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.077433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.077690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.077755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.078039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.078104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.078367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.078413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.078593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.078656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 
00:50:19.606 [2024-07-23 09:03:32.078904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.078967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.079167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.079211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.079397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.079443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.079644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.079707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.079908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.079971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.080202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.080247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.080471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.080517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.080794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.080857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.081100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.081163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.081423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.081486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 
00:50:19.606 [2024-07-23 09:03:32.081726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.081789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.606 qpair failed and we were unable to recover it. 00:50:19.606 [2024-07-23 09:03:32.081998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.606 [2024-07-23 09:03:32.082060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.082330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.082375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.082601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.082666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.082927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.082994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.083238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.083283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.083479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.083544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.083850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.083923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.084205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.084270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.084465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.084531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 
00:50:19.607 [2024-07-23 09:03:32.084748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.084811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.085097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.085170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.085432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.085506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.085830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.085876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.086114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.086176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.086406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.086474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.086707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.086773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.087036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.087100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.087400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.087468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 00:50:19.607 [2024-07-23 09:03:32.087733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.607 [2024-07-23 09:03:32.087796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.607 qpair failed and we were unable to recover it. 
00:50:19.607 [2024-07-23 09:03:32.088054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.607 [2024-07-23 09:03:32.088098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.607 qpair failed and we were unable to recover it.
00:50:19.607 [2024-07-23 09:03:32.088326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.607 [2024-07-23 09:03:32.088379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.607 qpair failed and we were unable to recover it.
00:50:19.879 [2024-07-23 09:03:32.088561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.879 [2024-07-23 09:03:32.088624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.879 qpair failed and we were unable to recover it.
00:50:19.879 [2024-07-23 09:03:32.088856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.879 [2024-07-23 09:03:32.088919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.879 qpair failed and we were unable to recover it.
00:50:19.879 [2024-07-23 09:03:32.089048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:50:19.879 [2024-07-23 09:03:32.089109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:50:19.879 [2024-07-23 09:03:32.089144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:50:19.879 [2024-07-23 09:03:32.089175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:50:19.879 [2024-07-23 09:03:32.089207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:50:19.879 [2024-07-23 09:03:32.089153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.879 [2024-07-23 09:03:32.089197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.879 qpair failed and we were unable to recover it.
00:50:19.879 [2024-07-23 09:03:32.089415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.879 [2024-07-23 09:03:32.089479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.879 qpair failed and we were unable to recover it.
00:50:19.879 [2024-07-23 09:03:32.089451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:50:19.879 [2024-07-23 09:03:32.089531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:50:19.879 [2024-07-23 09:03:32.089623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:50:19.879 [2024-07-23 09:03:32.089644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:50:19.879 [2024-07-23 09:03:32.089755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:50:19.879 [2024-07-23 09:03:32.089802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:50:19.879 qpair failed and we were unable to recover it.
00:50:19.879 [2024-07-23 09:03:32.090050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.090115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.090417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.090464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.090708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.090756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.091022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.091088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.091390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.091436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.091658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.091707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.091999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.092061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.092301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.092357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.092558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.092630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.092913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.092958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 
00:50:19.879 [2024-07-23 09:03:32.093166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.093212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.093394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.093440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.093645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.093710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.093955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.094019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.094195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.094240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.094419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.094484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.094706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.094771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.094940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.095004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.095273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.095333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 00:50:19.879 [2024-07-23 09:03:32.095539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.095604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.879 qpair failed and we were unable to recover it. 
00:50:19.879 [2024-07-23 09:03:32.095823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.879 [2024-07-23 09:03:32.095886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.096115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.096164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.096400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.096466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.096723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.096792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.097037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.097101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.097377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.097423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.097694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.097758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.098039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.098110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.098324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.098377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.098558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.098602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 
00:50:19.880 [2024-07-23 09:03:32.098792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.098856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.099037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.099102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.099275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.099330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.099537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.099601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.099882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.099927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.100137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.100189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.100405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.100450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.100610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.100654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.100959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.101005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.101214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.101259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 
00:50:19.880 [2024-07-23 09:03:32.101476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.101543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.101791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.101856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.102106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.102171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.102410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.102477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.102697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.102760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.102989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.103052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.103238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.103283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.103485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.103547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.103769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.103833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.104107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.104177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 
00:50:19.880 [2024-07-23 09:03:32.104428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.104492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.104708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.104772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.105030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.105075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.105254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.105298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.105614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.105659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.105947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.880 [2024-07-23 09:03:32.106011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.880 qpair failed and we were unable to recover it. 00:50:19.880 [2024-07-23 09:03:32.106280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.106350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.106523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.106588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.106806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.106870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.107163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.107227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 
00:50:19.881 [2024-07-23 09:03:32.107482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.107547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.107769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.107833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.108125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.108192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.108406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.108472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.108755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.108823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.109072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.109137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.109395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.109460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.109658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.109721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.109944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.110009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.110183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.110228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 
00:50:19.881 [2024-07-23 09:03:32.110385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.110432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.110616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.110677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.110873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.110919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.111193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.111238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.111443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.111489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.111750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.111800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.112055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.112122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.112378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.112442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.112640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.112704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.112866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.112932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 
00:50:19.881 [2024-07-23 09:03:32.113179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.113231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.113460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.113522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.113711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.113774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.114036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.114101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.114395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.114464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.114668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.114732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.114993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.115058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.115284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.115371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.115548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.115623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.115885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.115950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 
00:50:19.881 [2024-07-23 09:03:32.116144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.116189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.116414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.881 [2024-07-23 09:03:32.116479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.881 qpair failed and we were unable to recover it. 00:50:19.881 [2024-07-23 09:03:32.116768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.116833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.117065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.117134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.117414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.117479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.117758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.117822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.118060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.118125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.118386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.118452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.118729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.118793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.119061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.119126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 
00:50:19.882 [2024-07-23 09:03:32.119396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.119460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.119671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.119736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.120026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.120092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.120373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.120418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.120624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.120669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.120945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.121007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.121267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.121320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.121530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.121595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.121812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.121877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.122122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.122193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 
00:50:19.882 [2024-07-23 09:03:32.122416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.122478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.122781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.122858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.123116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.123182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.123413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.123476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.123676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.123740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.123963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.124034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.124239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.124285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.124462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.124527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.124719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.124794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.125082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.125127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 
00:50:19.882 [2024-07-23 09:03:32.125402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.125464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.125620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.125668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.125942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.125986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.126243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.126287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.126533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.126597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.126850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.126912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.127186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.127230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.127442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.882 [2024-07-23 09:03:32.127506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.882 qpair failed and we were unable to recover it. 00:50:19.882 [2024-07-23 09:03:32.127783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.127846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.128127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.128190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 
00:50:19.883 [2024-07-23 09:03:32.128414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.128479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.128796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.128840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.129091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.129153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.129386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.129458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.129689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.129763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.130055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.130119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.130390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.130435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.130677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.130739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.131032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.131094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.131282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.131333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 
00:50:19.883 [2024-07-23 09:03:32.131504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.131573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.131865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.131930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.132188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.132260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.132445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.132510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.132808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.132876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.133163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.133225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.133465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.133529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.133782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.133847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.134128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.134195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.134459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.134531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 
00:50:19.883 [2024-07-23 09:03:32.134824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.134894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.135081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.135143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.135431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.135495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.135806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.135869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.136158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.136222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.136425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.136497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.136802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.136865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.137081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.137136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.137407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.137473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 00:50:19.883 [2024-07-23 09:03:32.137730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.137795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.883 qpair failed and we were unable to recover it. 
00:50:19.883 [2024-07-23 09:03:32.138069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.883 [2024-07-23 09:03:32.138135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.138426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.138491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.138762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.138828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.139077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.139121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.139408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.139485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.139748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.139813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.140069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.140135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.140426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.140490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.140755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.140802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.141058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.141103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 
00:50:19.884 [2024-07-23 09:03:32.141250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.141295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.141525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.141604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.141843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.141906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.142189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.142235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.142532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.142602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.142882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.142945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.143189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.143245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.143447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.143512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.143783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.143849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.144144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.144218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 
00:50:19.884 [2024-07-23 09:03:32.144483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.144546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.144802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.144879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.145167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.145212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.145468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.145533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.145792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.145869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.146129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.146194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.146463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.146526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.146821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.146891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.147200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.147248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.147543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.147611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 
00:50:19.884 [2024-07-23 09:03:32.147931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.147976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.148193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.148238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.148500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.148568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.884 [2024-07-23 09:03:32.148816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.884 [2024-07-23 09:03:32.148880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.884 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.149064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.149129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.149442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.149517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.149810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.149878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.150136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.150181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.150438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.150512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.150807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.150875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 
00:50:19.885 [2024-07-23 09:03:32.151122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.151167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.151383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.151452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.151727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.151792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.152096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.152171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.152433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.152499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.152809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.152871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.153143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.153187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.153501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.153571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.153856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.153919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.154155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.154202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 
00:50:19.885 [2024-07-23 09:03:32.154518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.154587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.154873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.154941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.155211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.155256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.155521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.155586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.155887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.155962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.156204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.156248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.156472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.156538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.156837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.156900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.157191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.157261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.157566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.157637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 
00:50:19.885 [2024-07-23 09:03:32.157902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.157968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.158234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.158280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.158520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.158566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.158871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.158917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.159215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.159290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.159573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.159642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.159897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.159962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.160244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.160290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.160487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.160542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 00:50:19.885 [2024-07-23 09:03:32.160860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.885 [2024-07-23 09:03:32.160923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.885 qpair failed and we were unable to recover it. 
00:50:19.885 [2024-07-23 09:03:32.161165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.161233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.161548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.161624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.161913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.161989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.162271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.162329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.162558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.162603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.162863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.162933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.163220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.163285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.163590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.163635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.163859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.163922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.164207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.164275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 
00:50:19.886 [2024-07-23 09:03:32.164563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.164608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.164838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.164883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.165161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.165225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.165517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.165585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.165867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.165930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.166139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.166184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.166451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.166517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.166822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.166898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.167175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.167250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.167530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.167598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 
00:50:19.886 [2024-07-23 09:03:32.167853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.167916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.168169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.168213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.168505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.168577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.168855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.168920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.169193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.169237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.169560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.169616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.169947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.169992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.170262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.170322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.170635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.170702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.170947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.171009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 
00:50:19.886 [2024-07-23 09:03:32.171235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.171280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.171559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.171639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.171892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.171957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.172211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.172274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.172508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.172573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.172848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.172894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.886 qpair failed and we were unable to recover it. 00:50:19.886 [2024-07-23 09:03:32.173190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.886 [2024-07-23 09:03:32.173255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.173554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.173628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.173924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.173998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.174238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.174283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 
00:50:19.887 [2024-07-23 09:03:32.174528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.174578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.174834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.174897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.175168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.175232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.175482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.175539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.175781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.175845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.176099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.176167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.176396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.176469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.176777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.176841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.177086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.177158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.177403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.177465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 
00:50:19.887 [2024-07-23 09:03:32.177702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.177767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.178041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.178107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.178381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.178426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.178640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.178706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.178946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.179008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.179270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.179323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.179553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.179616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.179908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.179955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.180212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.180257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.180515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.180560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 
00:50:19.887 [2024-07-23 09:03:32.180799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.180862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.181145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.181192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.181455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.181520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.181748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.181810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.182096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.182160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.182440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.182511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.182754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.182817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.183105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.183169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.183449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.183515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.183794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.183841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 
00:50:19.887 [2024-07-23 09:03:32.184128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.184193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.184486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.887 [2024-07-23 09:03:32.184533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.887 qpair failed and we were unable to recover it. 00:50:19.887 [2024-07-23 09:03:32.184759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.184821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.185124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.185196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.185487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.185552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.185832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.185878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.186166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.186230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.186488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.186551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.186810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.186873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.187140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.187201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 
00:50:19.888 [2024-07-23 09:03:32.187484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.187551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.187878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.187952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.188215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.188275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.188554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.188624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.188939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.188988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.189227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.189278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.189556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.189624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.189945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.189990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.190268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.190329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.190612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.190676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 
00:50:19.888 [2024-07-23 09:03:32.190949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.191011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.191290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.191347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.191628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.191695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.191946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.192010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.192241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.192285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.192505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.192551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.192845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.192912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.193147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.193211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.193447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.193494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.193809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.193872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 
00:50:19.888 [2024-07-23 09:03:32.194156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.194222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.194456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.194502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.194754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.194817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.888 qpair failed and we were unable to recover it. 00:50:19.888 [2024-07-23 09:03:32.195091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.888 [2024-07-23 09:03:32.195157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.195446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.195510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.195729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.195794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.196073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.196135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.196390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.196460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.196749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.196815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.197056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.197120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 
00:50:19.889 [2024-07-23 09:03:32.197322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.197367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.197617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.197680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.197928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.197993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.198258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.198303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.198546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.198591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.198896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.198967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.199194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.199240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.199500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.199546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.199843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.199919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.200186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.200251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 
00:50:19.889 [2024-07-23 09:03:32.200510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.200574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.200854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.200924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.201245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.201291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.201587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.201632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.201931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.202007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.202273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.202335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.202581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.202653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.202904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.202966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.203236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.203281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.203592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.203658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 
00:50:19.889 [2024-07-23 09:03:32.203910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.203978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.204211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.204256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.204522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.204593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.204809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.204874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.205075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.205139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.205421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.205485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.205752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.205803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.206053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.206118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.206373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.206418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 00:50:19.889 [2024-07-23 09:03:32.206731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.889 [2024-07-23 09:03:32.206798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.889 qpair failed and we were unable to recover it. 
00:50:19.890 [2024-07-23 09:03:32.207050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.207115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.207365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.207411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.207696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.207763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.208018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.208085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.208332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.208379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.208623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.208668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.208909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.208989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.209212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.209257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.209555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.209613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.209913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.209982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 
00:50:19.890 [2024-07-23 09:03:32.210258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.210304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.210565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.210610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.210844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.210909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.211194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.211259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.211535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.211580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.211834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.211898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.212196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.212274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.212571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.212657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.212900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.212963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.213241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.213286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 
00:50:19.890 [2024-07-23 09:03:32.213487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.213532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.213824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.213888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.214158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.214204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.214406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.214461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.214760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.214834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.215129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.215204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.215488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.215558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.215849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.215912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.216213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.216284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.216551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.216623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 
00:50:19.890 [2024-07-23 09:03:32.216887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.216934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.217189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.217234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.217550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.217620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.217904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.217974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.218210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.218256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.890 [2024-07-23 09:03:32.218518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.890 [2024-07-23 09:03:32.218583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.890 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.218873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.218938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.219221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.219288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.219582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.219650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.219939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.220002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 
00:50:19.891 [2024-07-23 09:03:32.220229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.220274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.220566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.220636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.220944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.220994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.221252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.221297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.221559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.221623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.221923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.221989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.222277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.222331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.222655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.222712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.223007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.223079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.223371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.223445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 
00:50:19.891 [2024-07-23 09:03:32.223695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.223759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.224050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.224112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.224394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.224439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.224686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.224749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.225012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.225077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.225354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.225401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.225638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.225692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.225986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.226060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.226284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.226336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.226617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.226661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 
00:50:19.891 [2024-07-23 09:03:32.226961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.227033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.227321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.227377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.227661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.227705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.227967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.228030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.228297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.228352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.228591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.228642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.228945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.229015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.229303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.229378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.229655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.229704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.230000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.230076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 
00:50:19.891 [2024-07-23 09:03:32.230321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.230382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.230668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.891 [2024-07-23 09:03:32.230714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.891 qpair failed and we were unable to recover it. 00:50:19.891 [2024-07-23 09:03:32.230977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.231039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.231326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.231372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.231608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.231653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.231945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.232009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.232298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.232385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.232667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.232711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.232958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.233033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.233329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.233376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 
00:50:19.892 [2024-07-23 09:03:32.233644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.233690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.233971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.234040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.234294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.234351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.234646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.234693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.234985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.235052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.235236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.235281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.235564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.235608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.235881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.235945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.236248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.236326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.236602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.236647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 
00:50:19.892 [2024-07-23 09:03:32.236960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.237006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.237274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.237338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.237631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.237681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.237913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.237975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.238254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.238333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.238657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.238703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.238996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.239071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.239303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.239359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.239622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.239666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.240006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.240052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 
00:50:19.892 [2024-07-23 09:03:32.240288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.240342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.240616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.240661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.240957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.241020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.241298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.241353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.241567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.241612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.241817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.241894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.242184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.242258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.242524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.242571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.892 [2024-07-23 09:03:32.242851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.892 [2024-07-23 09:03:32.242923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.892 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.243176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.243238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 
00:50:19.893 [2024-07-23 09:03:32.243520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.243565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.243832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.243896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.244137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.244200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.244491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.244569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.244839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.244889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.245133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.245196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.245445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.245509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.245754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.245816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.246040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.246111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.246440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.246487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 
00:50:19.893 [2024-07-23 09:03:32.246779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.246854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.247129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.247193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.247490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.247566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.247870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.247933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.248208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.248262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.248561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.248640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.248865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.248928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.249173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.249218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.249449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.249523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.249823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.249895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 
00:50:19.893 [2024-07-23 09:03:32.250175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.250243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.250510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.250575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.250844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.250914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.251193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.251238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.251506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.251568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.251830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.251895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.252187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.252265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.252572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.252645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.252905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.252970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 00:50:19.893 [2024-07-23 09:03:32.253207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.893 [2024-07-23 09:03:32.253253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.893 qpair failed and we were unable to recover it. 
00:50:19.893 [2024-07-23 09:03:32.253539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.253609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.253909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.253976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.254244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.254288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.254633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.254679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.254964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.255033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.255295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.255351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.255575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.255641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.255929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.255992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.256254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.256299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.256606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.256673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 
00:50:19.894 [2024-07-23 09:03:32.256893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.256970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.257246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.257292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.257468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.257513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.257747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.257811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.258101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.258178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.258467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.258513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.258800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.258863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.259072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.259137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.259383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.259455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.259770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.259835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 
00:50:19.894 [2024-07-23 09:03:32.260127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.260202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.260463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.260529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.260818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.260893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.261129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.261176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.261425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.261489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.261737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.261799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.262047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.262110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.262369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.262416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.262664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.262729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.262987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.263050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 
00:50:19.894 [2024-07-23 09:03:32.263290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.263346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.263581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.263647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.263902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.263971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.264219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.264264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.264528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.264593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.264858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.894 [2024-07-23 09:03:32.264925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.894 qpair failed and we were unable to recover it. 00:50:19.894 [2024-07-23 09:03:32.265222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.265269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.265563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.265640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.265877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.265941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.266225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.266272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 
00:50:19.895 [2024-07-23 09:03:32.266552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.266622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.266874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.266939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.267180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.267245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.267475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.267539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.267793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.267858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.268150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.268219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.268491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.268556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.268828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.268893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.269167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.269238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.269500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.269565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 
00:50:19.895 [2024-07-23 09:03:32.269850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.269914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.270181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.270227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.270483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.270547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.270832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.270898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.271156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.271219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.271495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.271560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.271805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.271870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.272082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.272147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.272397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.272522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.272799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.272846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 
00:50:19.895 [2024-07-23 09:03:32.273090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.273154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.273434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.273481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.273771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.273841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.274089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.274136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.274382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.274455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.274745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.274815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.275081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.275145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.275435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.275498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.275782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.275853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.276121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.276166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 
00:50:19.895 [2024-07-23 09:03:32.276424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.276488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.276752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.895 [2024-07-23 09:03:32.276816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.895 qpair failed and we were unable to recover it. 00:50:19.895 [2024-07-23 09:03:32.277054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.277124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.277379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.277425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.277693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.277755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.278024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.278099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.278390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.278436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.278726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.278801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.279066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.279129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.279421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.279485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 
00:50:19.896 [2024-07-23 09:03:32.279727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.279792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.280043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.280108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.280397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.280443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.280703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.280769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.281035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.281097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.281372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.281418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.281679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.281744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.282027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.282073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.282364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.282411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.282706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.282782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 
00:50:19.896 [2024-07-23 09:03:32.283070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.283136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.283413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.283459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.283693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.283758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.284015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.284077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.284409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.284455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.284731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.284797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.285101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.285168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.285437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.285483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.285731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.285796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.286094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.286158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 
00:50:19.896 [2024-07-23 09:03:32.286466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.286514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.286762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.286834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.287103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.287168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.287426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.287489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.287777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.287845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.288092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.288157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.288419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.288483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.288746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.288809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.896 qpair failed and we were unable to recover it. 00:50:19.896 [2024-07-23 09:03:32.289084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.896 [2024-07-23 09:03:32.289132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.289440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.289516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 
00:50:19.897 [2024-07-23 09:03:32.289765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.289837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.290115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.290160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.290331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.290382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.290627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.290690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.290974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.291040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.291290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.291346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.291607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.291672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.291954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.292017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.292283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.292340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.292569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.292615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 
00:50:19.897 [2024-07-23 09:03:32.292882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.292947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.293250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.293296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.293536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.293596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.293901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.293974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.294248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.294293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.294551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.294598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.294863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.294927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.295219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.295283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.295518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.295562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.295819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.295895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 
00:50:19.897 [2024-07-23 09:03:32.296157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.296220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.296477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.296523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.296737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.296801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.297051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.297096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.297307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.297359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.297578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.297643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.297865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.297931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.298231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.298322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.298568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.298637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.298932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.298995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 
00:50:19.897 [2024-07-23 09:03:32.299263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.299307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.299552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.299610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.299909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.299973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.300219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.300265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.897 [2024-07-23 09:03:32.300460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.897 [2024-07-23 09:03:32.300505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.897 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.300781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.300851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.301129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.301175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.301464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.301532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.301791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.301855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.302140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.302207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 
00:50:19.898 [2024-07-23 09:03:32.302464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.302527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.302810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.302872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.303115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.303187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.303450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.303516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.303813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.303886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.304169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.304214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.304470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.304533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.304824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.304886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.305171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.305239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.305550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.305615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 
00:50:19.898 [2024-07-23 09:03:32.305828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.305892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.306156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.306218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.306485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.306555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.306843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.306908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.307201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.307268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.307570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.307654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.307924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.307971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.308208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.308254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.308484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.308549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.308838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.308909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 
00:50:19.898 [2024-07-23 09:03:32.309153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.309217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.309463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.309527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.309807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.309875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.310122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.310188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.310488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.310559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.898 [2024-07-23 09:03:32.310862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.898 [2024-07-23 09:03:32.310928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.898 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.311174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.311218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.311488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.311552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.311850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.311915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.312156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.312201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 
00:50:19.899 [2024-07-23 09:03:32.312454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.312517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.312742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.312804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.313065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.313127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.313421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.313483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.313690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.313754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.314063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.314136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.314397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.314476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.314732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.314777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.315056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.315119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.315418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.315486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 
00:50:19.899 [2024-07-23 09:03:32.315724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.315774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.316069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.316132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.316439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.316521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.316812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.316893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.317166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.317212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.317523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.317570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.317861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.317932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.318160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.318205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.318449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.318512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.318763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.318826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 
00:50:19.899 [2024-07-23 09:03:32.319121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.319194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.319470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.319534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.319813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.319878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.320176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.320221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.320509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.320573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.320803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.320867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.321103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.321170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.321464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.321528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.321787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.321850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.322117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.322182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 
00:50:19.899 [2024-07-23 09:03:32.322482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.322550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.899 [2024-07-23 09:03:32.322802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.899 [2024-07-23 09:03:32.322864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.899 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.323149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.323218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.323515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.323585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.323876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.323939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.324213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.324260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.324528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.324593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.324854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.324918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.325159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.325222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.325548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.325602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 
00:50:19.900 [2024-07-23 09:03:32.325888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.325958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.326140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.326184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.326436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.326503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.326792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.326858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.327114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.327179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.327415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.327478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.327766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.327840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.328112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.328178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.328466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.328531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.328781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.328844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 
00:50:19.900 [2024-07-23 09:03:32.329137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.329200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.329442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.329505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.329787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.329857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.330139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.330210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.330497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.330567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.330846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.330907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.331140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.331185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.331458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.331523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.331762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.331827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.332096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.332161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 
00:50:19.900 [2024-07-23 09:03:32.332388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.332460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.332727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.332795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.333084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.333148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.333385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.333458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.333757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.333820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.334128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.334175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.334463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.334528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.334816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.900 [2024-07-23 09:03:32.334890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.900 qpair failed and we were unable to recover it. 00:50:19.900 [2024-07-23 09:03:32.335140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.335185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.335443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.335512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 
00:50:19.901 [2024-07-23 09:03:32.335801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.335880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.336151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.336197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.336484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.336553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.336801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.336865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.337154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.337228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.337481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.337546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.337797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.337861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.338101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.338162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 00:50:19.901 [2024-07-23 09:03:32.338429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.338476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:50:19.901 qpair failed and we were unable to recover it. 
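errno = 111 in the connect() failures above is ECONNREFUSED: the target-side listener on 10.0.0.2:4420 has been taken down by this disconnect test case, so every host-side qpair reconnect attempt is refused and logged the same way until the controller is finally failed (see the "Unable to reset the controller" entry just below). A minimal shell sketch for confirming this from the initiator node; the python3 and ss invocations are illustrative assumptions and are not part of the captured test scripts:

  # Assumption: run on the initiator host while the failures above are being logged.
  # Translate the errno reported by posix_sock_create into its symbolic name.
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # ECONNREFUSED Connection refused
  # Check whether anything is listening on the NVMe/TCP port the host keeps retrying.
  ss -ltn '( sport = :4420 )'   # shows no socket while the 10.0.0.2:4420 listener is down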
00:50:19.901 [2024-07-23 09:03:32.338994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:19.901 [2024-07-23 09:03:32.339132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:50:19.901 [2024-07-23 09:03:32.339202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(5) to be set 00:50:19.901 [2024-07-23 09:03:32.339294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:50:19.901 [2024-07-23 09:03:32.339393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:50:19.901 [2024-07-23 09:03:32.339428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:50:19.901 [2024-07-23 09:03:32.339461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:50:19.901 Unable to reset the controller. 00:50:20.535 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.536 09:03:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:20.536 Malloc0 00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:20.536 [2024-07-23 09:03:33.037397] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.536 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:20.794 [2024-07-23 09:03:33.070134] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.794 09:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2553917 00:50:21.052 Controller properly reset. 00:50:25.241 Initializing NVMe Controllers 00:50:25.241 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:50:25.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:50:25.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:50:25.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:50:25.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:50:25.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:50:25.241 Initialization complete. Launching workers. 
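The rpc_cmd calls above (bdev_malloc_create, nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the complete target-side bring-up that lets the controller reset succeed; once the 10.0.0.2:4420 listener is back, the initiator attaches and launches its worker threads below. A minimal sketch of replaying the same bring-up by hand, assuming rpc_cmd forwards its arguments to scripts/rpc.py on the default local RPC socket (which is how the common test helpers drive the running nvmf_tgt):

  # Assumption: executed from the SPDK source tree against an already-running nvmf_tgt.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # backing namespace, as in the log
  ./scripts/rpc.py nvmf_create_transport -t tcp -o                              # TCP transport, same flags as the test
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420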
00:50:25.241 Starting thread on core 1 00:50:25.241 Starting thread on core 2 00:50:25.241 Starting thread on core 3 00:50:25.241 Starting thread on core 0 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:50:25.241 00:50:25.241 real 0m12.363s 00:50:25.241 user 0m36.241s 00:50:25.241 sys 0m7.916s 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:50:25.241 ************************************ 00:50:25.241 END TEST nvmf_target_disconnect_tc2 00:50:25.241 ************************************ 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:50:25.241 rmmod nvme_tcp 00:50:25.241 rmmod nvme_fabrics 00:50:25.241 rmmod nvme_keyring 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2554404 ']' 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2554404 00:50:25.241 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2554404 ']' 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2554404 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2554404 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2554404' 00:50:25.242 
killing process with pid 2554404 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2554404 00:50:25.242 09:03:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2554404 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:28.540 09:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:30.459 09:03:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:50:30.459 00:50:30.459 real 0m21.980s 00:50:30.459 user 1m8.257s 00:50:30.459 sys 0m12.155s 00:50:30.459 09:03:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:50:30.459 09:03:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:50:30.460 ************************************ 00:50:30.460 END TEST nvmf_target_disconnect 00:50:30.460 ************************************ 00:50:30.719 09:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:50:30.719 09:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:50:30.719 00:50:30.719 real 9m55.410s 00:50:30.719 user 24m55.761s 00:50:30.719 sys 2m12.645s 00:50:30.719 09:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:50:30.719 09:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:30.719 ************************************ 00:50:30.719 END TEST nvmf_host 00:50:30.719 ************************************ 00:50:30.719 09:03:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:50:30.719 00:50:30.719 real 36m51.438s 00:50:30.719 user 96m55.539s 00:50:30.719 sys 8m4.781s 00:50:30.719 09:03:43 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:50:30.719 09:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:30.719 ************************************ 00:50:30.719 END TEST nvmf_tcp 00:50:30.719 ************************************ 00:50:30.719 09:03:43 -- common/autotest_common.sh@1142 -- # return 0 00:50:30.719 09:03:43 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:50:30.719 09:03:43 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:50:30.719 09:03:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:50:30.719 09:03:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:30.719 09:03:43 -- common/autotest_common.sh@10 -- # set +x 00:50:30.719 ************************************ 00:50:30.719 START TEST spdkcli_nvmf_tcp 00:50:30.719 ************************************ 00:50:30.719 09:03:43 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:50:30.719 * Looking for test storage... 00:50:30.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2555771 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2555771 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2555771 ']' 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:30.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:50:30.720 09:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:30.978 [2024-07-23 09:03:43.347488] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:50:30.978 [2024-07-23 09:03:43.347758] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555771 ] 00:50:31.237 EAL: No free 2048 kB hugepages reported on node 1 00:50:31.237 [2024-07-23 09:03:43.604216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:31.805 [2024-07-23 09:03:44.034684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:50:31.805 [2024-07-23 09:03:44.034691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:32.372 09:03:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:50:32.372 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:50:32.372 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:50:32.372 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:50:32.372 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:50:32.372 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:50:32.372 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:50:32.372 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:50:32.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:50:32.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:50:32.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:50:32.372 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:32.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:50:32.372 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:50:32.372 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:32.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:50:32.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:50:32.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:50:32.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:50:32.373 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:50:32.373 ' 00:50:35.658 [2024-07-23 09:03:48.063812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:37.032 [2024-07-23 09:03:49.335174] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:50:39.559 [2024-07-23 09:03:51.683173] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:50:41.459 [2024-07-23 09:03:53.710160] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:50:42.833 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:50:42.833 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:50:42.833 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:50:42.833 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:50:42.833 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:50:42.833 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:50:42.833 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:50:42.833 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:50:42.833 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:50:42.833 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:50:42.833 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:50:42.833 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:50:42.833 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:50:43.092 09:03:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:50:43.350 09:03:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:43.608 09:03:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:50:43.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:50:43.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:50:43.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:50:43.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:50:43.608 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:50:43.608 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:50:43.608 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:50:43.608 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:50:43.608 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:50:43.608 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:50:43.608 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:50:43.608 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:50:43.608 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:50:43.608 ' 00:50:50.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:50:50.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:50:50.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:50:50.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:50:50.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:50:50.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:50:50.165 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:50:50.165 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:50:50.165 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:50:50.165 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:50:50.165 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:50:50.165 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:50:50.165 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:50:50.165 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2555771 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2555771 ']' 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2555771 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2555771 00:50:50.165 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:50:50.166 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:50:50.166 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2555771' 00:50:50.166 killing process with pid 2555771 00:50:50.166 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2555771 00:50:50.166 09:04:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2555771 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2555771 ']' 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2555771 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2555771 ']' 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2555771 00:50:51.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2555771) - No such process 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2555771 is not found' 00:50:51.544 Process with pid 2555771 is not found 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:50:51.544 00:50:51.544 real 0m20.859s 00:50:51.544 user 0m43.196s 00:50:51.544 sys 0m1.497s 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:50:51.544 09:04:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:51.544 ************************************ 00:50:51.544 END TEST spdkcli_nvmf_tcp 00:50:51.544 ************************************ 00:50:51.544 09:04:03 -- common/autotest_common.sh@1142 -- # return 0 00:50:51.544 09:04:03 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:50:51.544 09:04:03 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:50:51.544 09:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:51.544 09:04:03 -- common/autotest_common.sh@10 -- # set +x 00:50:51.544 ************************************ 00:50:51.544 START TEST nvmf_identify_passthru 00:50:51.544 ************************************ 00:50:51.544 09:04:04 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:50:51.804 * Looking for test storage... 00:50:51.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:50:51.804 09:04:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:51.804 09:04:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:51.804 09:04:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:51.804 09:04:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:51.804 09:04:04 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.804 09:04:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.804 09:04:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.804 09:04:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:50:51.804 09:04:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:50:51.804 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:50:51.804 09:04:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:51.804 09:04:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:51.804 09:04:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:51.804 09:04:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:51.804 09:04:04 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.805 09:04:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.805 09:04:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.805 09:04:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:50:51.805 09:04:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:51.805 09:04:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:51.805 09:04:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:50:51.805 09:04:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:50:51.805 09:04:04 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:50:51.805 09:04:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:50:55.132 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:55.132 09:04:07 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:50:55.132 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:50:55.133 Found 0000:84:00.0 (0x8086 - 0x159b) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:50:55.133 Found 0000:84:00.1 (0x8086 - 0x159b) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:50:55.133 Found net devices under 0000:84:00.0: cvl_0_0 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:50:55.133 Found net devices under 0000:84:00.1: cvl_0_1 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
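Editor's note: the trace above shows nvmf/common.sh walking the PCI bus for supported NICs (Intel E810, device id 0x159b, on 0000:84:00.0/.1 here) and then resolving each matching function's kernel net device through sysfs. A minimal stand-alone sketch of that lookup, assuming the standard sysfs layout; the vendor/device ids and the "Found net devices under ..." wording are taken from the log, everything else is illustrative rather than the harness's exact code:

    # List net devices that sit behind Intel E810 (0x8086:0x159b) PCI functions.
    # Mirrors the pci_devs -> pci_net_devs resolution traced in nvmf/common.sh above.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")    # e.g. 0x8086
        device=$(cat "$pci/device")    # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done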
00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:50:55.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:55.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:50:55.133 00:50:55.133 --- 10.0.0.2 ping statistics --- 00:50:55.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:55.133 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:50:55.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:55.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:50:55.133 00:50:55.133 --- 10.0.0.1 ping statistics --- 00:50:55.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:55.133 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:50:55.133 09:04:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:50:55.133 09:04:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:50:55.133 09:04:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:50:55.133 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:50:55.134 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:50:55.134 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:50:55.134 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:50:55.394 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:50:55.394 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:50:55.394 09:04:07 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:50:55.394 09:04:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:50:55.394 09:04:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:50:55.394 09:04:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:50:55.394 09:04:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:50:55.394 09:04:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:50:55.654 EAL: No free 2048 kB hugepages reported on node 1 00:50:57.831 
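Editor's note: with the namespaced TCP test network verified by the pings above, the test next resolves the first local NVMe controller's PCIe address (BDF) and reads its serial number with spdk_nvme_identify. A condensed sketch of that step, built only from the commands visible in the trace; the repo path is this workspace's and head -n1 stands in for the harness's "first bdf" selection, so treat the exact wiring as an assumption:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # First PCIe address reported by SPDK's NVMe enumeration helper.
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    # Identify the controller directly over PCIe and extract the serial number field.
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
    echo "first NVMe bdf=$bdf serial=$serial"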
Cancelling nested steps due to timeout 00:50:57.834 Sending interrupt signal to process 00:51:00.931 09:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:51:00.931 09:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:51:00.931 09:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:51:00.931 09:04:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:51:00.931 EAL: No free 2048 kB hugepages reported on node 1 00:51:03.470 Terminated 00:51:03.478 script returned exit code 143 00:51:03.482 [Pipeline] } 00:51:03.504 [Pipeline] // stage 00:51:03.512 [Pipeline] } 00:51:03.534 [Pipeline] // timeout 00:51:03.542 [Pipeline] } 00:51:03.548 Timeout has been exceeded 00:51:03.548 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 3830dc97-7dcf-414b-882f-f43826535fd9 00:51:03.548 Setting overall build result to ABORTED 00:51:03.567 [Pipeline] // catchError 00:51:03.572 [Pipeline] } 00:51:03.588 [Pipeline] // wrap 00:51:03.593 [Pipeline] } 00:51:03.607 [Pipeline] // catchError 00:51:03.617 [Pipeline] stage 00:51:03.618 [Pipeline] { (Epilogue) 00:51:03.631 [Pipeline] catchError 00:51:03.632 [Pipeline] { 00:51:03.644 [Pipeline] echo 00:51:03.645 Cleanup processes 00:51:03.649 [Pipeline] sh 00:51:03.935 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:51:03.935 2139899 sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:51:03.935 2139912 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:51:03.935 2139951 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:51:03.935 2139953 python3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --server 00:51:03.935 2139980 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:51:03.935 2139981 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:51:03.935 2139983 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:51:03.935 2139986 sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:51:03.935 2140022 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:51:03.935 2558176 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:51:03.935 2558190 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:51:03.935 2558191 python3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --server 
00:51:03.935 2560357 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:51:03.935 2560358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r trtype:PCIe traddr:0000:82:00.0 -i 0 00:51:03.935 2560761 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715430 00:51:03.935 2560799 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:51:03.948 [Pipeline] sh 00:51:04.233 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:51:04.233 ++ grep -v 'sudo pgrep' 00:51:04.233 ++ awk '{print $1}' 00:51:04.233 + sudo kill -9 2139899 2139912 2139951 2139953 2139980 2139981 2139983 2139986 2140022 2558176 2558190 2558191 2560357 2560358 00:51:04.244 [Pipeline] sh 00:51:04.528 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:51:11.105 [Pipeline] sh 00:51:11.390 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:51:11.649 Artifacts sizes are good 00:51:11.666 [Pipeline] archiveArtifacts 00:51:11.679 Archiving artifacts 00:51:11.950 [Pipeline] sh 00:51:12.235 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:51:12.250 [Pipeline] cleanWs 00:51:12.260 [WS-CLEANUP] Deleting project workspace... 00:51:12.260 [WS-CLEANUP] Deferred wipeout is used... 00:51:12.267 [WS-CLEANUP] done 00:51:12.268 [Pipeline] } 00:51:12.289 [Pipeline] // catchError 00:51:12.300 [Pipeline] echo 00:51:12.302 Tests finished with errors. Please check the logs for more info. 00:51:12.306 [Pipeline] echo 00:51:12.308 Execution node will be rebooted. 00:51:12.324 [Pipeline] build 00:51:12.328 Scheduling project: reset-job 00:51:12.342 [Pipeline] sh 00:51:12.621 + logger -p user.info -t JENKINS-CI 00:51:12.630 [Pipeline] } 00:51:12.647 [Pipeline] // stage 00:51:12.654 [Pipeline] } 00:51:12.671 [Pipeline] // node 00:51:12.678 [Pipeline] End of Pipeline 00:51:12.719 Finished: ABORTED
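Editor's note (appended): the Epilogue cleanup above reaps every process still running out of the workspace after the timeout using a pgrep/grep/awk/kill idiom. A self-contained sketch of that pattern, with the workspace path taken from the log; the xargs form is an equivalent rewrite of the pipeline's command substitution, not its literal script:

    workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Find every process whose command line references the workspace,
    # drop the pgrep invocation itself, and force-kill whatever remains.
    sudo pgrep -af "$workspace" \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9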